Nov 24 21:07:06 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 21:07:06 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:06 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:07:07 crc restorecon[4697]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07
crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 
21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 
crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:07:07 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:07:07 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 24 21:07:08 crc kubenswrapper[4801]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.376799 4801 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386241 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386271 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386281 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386289 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386298 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386307 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386315 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386323 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386334 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386344 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386353 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386361 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386393 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386404 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386414 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386424 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386432 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386441 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386451 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386461 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386469 4801 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386478 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386485 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386494 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386502 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386510 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386518 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386526 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386534 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386546 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386555 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386563 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386571 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386580 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386587 4801 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386596 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386603 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386611 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386620 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386628 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386635 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386643 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386651 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386658 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386666 4801 feature_gate.go:330] unrecognized
feature gate: VSphereControlPlaneMachineSet Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386673 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386680 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386688 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386696 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386703 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386711 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386718 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386725 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386733 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386740 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386748 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386756 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386767 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386774 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 21:07:08 crc kubenswrapper[4801]: 
W1124 21:07:08.386782 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386789 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386797 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386806 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386814 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386822 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386830 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386837 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386844 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386851 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386859 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.386867 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387818 4801 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387838 4801 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387852 4801 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 
21:07:08.387863 4801 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387874 4801 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387884 4801 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387894 4801 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387905 4801 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387914 4801 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387923 4801 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387933 4801 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387942 4801 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387951 4801 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387960 4801 flags.go:64] FLAG: --cgroup-root="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387969 4801 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387978 4801 flags.go:64] FLAG: --client-ca-file="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387987 4801 flags.go:64] FLAG: --cloud-config="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.387996 4801 flags.go:64] FLAG: --cloud-provider="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388005 4801 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388017 4801 flags.go:64] FLAG: --cluster-domain="" 
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388027 4801 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388037 4801 flags.go:64] FLAG: --config-dir="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388045 4801 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388055 4801 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388066 4801 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388075 4801 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388084 4801 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388093 4801 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388103 4801 flags.go:64] FLAG: --contention-profiling="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388112 4801 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388121 4801 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388130 4801 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388139 4801 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388149 4801 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388159 4801 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388168 4801 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 21:07:08 crc kubenswrapper[4801]: 
I1124 21:07:08.388177 4801 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388186 4801 flags.go:64] FLAG: --enable-server="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388195 4801 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388205 4801 flags.go:64] FLAG: --event-burst="100" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388214 4801 flags.go:64] FLAG: --event-qps="50" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388223 4801 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388232 4801 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388241 4801 flags.go:64] FLAG: --eviction-hard="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388251 4801 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388259 4801 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388268 4801 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388277 4801 flags.go:64] FLAG: --eviction-soft="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388287 4801 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388295 4801 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388304 4801 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388314 4801 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388322 4801 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 21:07:08 crc 
kubenswrapper[4801]: I1124 21:07:08.388331 4801 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388340 4801 flags.go:64] FLAG: --feature-gates="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388350 4801 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388359 4801 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388394 4801 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388403 4801 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388413 4801 flags.go:64] FLAG: --healthz-port="10248" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388422 4801 flags.go:64] FLAG: --help="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388431 4801 flags.go:64] FLAG: --hostname-override="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388439 4801 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388448 4801 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388458 4801 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388467 4801 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388475 4801 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388484 4801 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388493 4801 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388501 4801 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 21:07:08 crc 
kubenswrapper[4801]: I1124 21:07:08.388510 4801 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388519 4801 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388528 4801 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388536 4801 flags.go:64] FLAG: --kube-reserved="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388545 4801 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388553 4801 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388562 4801 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388571 4801 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388580 4801 flags.go:64] FLAG: --lock-file="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388588 4801 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388599 4801 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388608 4801 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388622 4801 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388630 4801 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388639 4801 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388648 4801 flags.go:64] FLAG: --logging-format="text" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388656 4801 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" 
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388666 4801 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388675 4801 flags.go:64] FLAG: --manifest-url="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388683 4801 flags.go:64] FLAG: --manifest-url-header="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388694 4801 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388703 4801 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388714 4801 flags.go:64] FLAG: --max-pods="110" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388723 4801 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388732 4801 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388740 4801 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388749 4801 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388758 4801 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388768 4801 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388777 4801 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388796 4801 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388805 4801 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388814 4801 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 21:07:08 crc 
kubenswrapper[4801]: I1124 21:07:08.388823 4801 flags.go:64] FLAG: --pod-cidr="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388831 4801 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388844 4801 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388852 4801 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388861 4801 flags.go:64] FLAG: --pods-per-core="0" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388871 4801 flags.go:64] FLAG: --port="10250" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388880 4801 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388888 4801 flags.go:64] FLAG: --provider-id="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388897 4801 flags.go:64] FLAG: --qos-reserved="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388906 4801 flags.go:64] FLAG: --read-only-port="10255" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388915 4801 flags.go:64] FLAG: --register-node="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388924 4801 flags.go:64] FLAG: --register-schedulable="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388933 4801 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388947 4801 flags.go:64] FLAG: --registry-burst="10" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388955 4801 flags.go:64] FLAG: --registry-qps="5" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388964 4801 flags.go:64] FLAG: --reserved-cpus="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388973 4801 flags.go:64] FLAG: --reserved-memory="" Nov 
24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388983 4801 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.388992 4801 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389001 4801 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389009 4801 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389018 4801 flags.go:64] FLAG: --runonce="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389027 4801 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389036 4801 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389045 4801 flags.go:64] FLAG: --seccomp-default="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389053 4801 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389062 4801 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389072 4801 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389080 4801 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389089 4801 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389098 4801 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389107 4801 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389115 4801 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389125 4801 flags.go:64] FLAG: 
--streaming-connection-idle-timeout="4h0m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389134 4801 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389143 4801 flags.go:64] FLAG: --system-cgroups="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389152 4801 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389165 4801 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389174 4801 flags.go:64] FLAG: --tls-cert-file="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389182 4801 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389730 4801 flags.go:64] FLAG: --tls-min-version="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389750 4801 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389763 4801 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389776 4801 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389794 4801 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.389806 4801 flags.go:64] FLAG: --v="2" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.390233 4801 flags.go:64] FLAG: --version="false" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.390254 4801 flags.go:64] FLAG: --vmodule="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.390284 4801 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.390300 4801 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391107 4801 feature_gate.go:330] 
unrecognized feature gate: InsightsRuntimeExtractor Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391130 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391145 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391172 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391188 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391203 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391218 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391231 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391243 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391261 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391277 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391290 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391301 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391312 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391321 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391346 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391357 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391410 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391431 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391442 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391458 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391582 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391609 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391623 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391633 
4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391643 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391653 4801 feature_gate.go:330] unrecognized feature gate: Example Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391662 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391670 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391678 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391686 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391695 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391703 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391711 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391727 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391736 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391743 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391753 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391761 4801 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391769 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391777 4801 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391786 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391805 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.391813 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392160 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392177 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392188 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392198 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392208 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392216 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392226 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392234 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392246 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392256 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392266 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392275 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392285 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392297 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392307 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392315 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392324 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392332 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392343 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392352 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392360 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392397 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392406 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392414 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392422 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392430 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.392439 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.392453 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.405158 4801 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.405198 4801 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405391 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405410 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405420 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405430 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405439 4801 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405447 4801 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405456 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405464 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405475 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405486 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405496 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405505 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405527 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405538 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405548 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405558 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405567 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405575 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405586 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405596 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405604 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405613 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405622 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405630 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405639 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405649 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405658 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405668 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405676 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405685 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405695 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405703 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405712 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405720 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405729 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405737 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405746 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405754 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405762 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405770 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405778 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405789 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405798 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405806 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405815 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405824 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405832 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405841 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405849 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405857 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405865 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405873 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405881 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405890 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405897 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405905 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405913 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405920 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405928 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405935 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405943 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405950 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405958 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405966 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405973 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405981 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405989 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.405997 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406004 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406012 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406020 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.406033 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406252 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406263 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406272 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406283 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406292 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406300 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406308 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406316 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406325 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406334 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406342 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406350 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406358 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406388 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406398 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406406 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406414 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406422 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406433 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406443 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406452 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406460 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406468 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406476 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406485 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406493 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406501 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406510 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406517 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406526 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406533 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406541 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406549 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406556 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406564 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406572 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406579 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406587 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406594 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406602 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406610 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406618 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406625 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406633 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406640 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406648 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406656 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406664 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406671 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406679 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406687 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406694 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406702 4801 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406710 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406720 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406730 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406738 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406746 4801 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406753 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406763 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406773 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406781 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406789 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406800 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406810 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406818 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406827 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406837 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406845 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406853 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.406861 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.406873 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.407067 4801 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.413346 4801 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.413499 4801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.415638 4801 server.go:997] "Starting client certificate rotation"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.415685 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.417149 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 13:54:28.838908962 +0000 UTC
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.417326 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.444112 4801 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.448149 4801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.448547 4801 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.467491 4801 log.go:25] "Validated CRI v1 runtime API"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.509599 4801 log.go:25] "Validated CRI v1 image API"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.511625 4801 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.520265 4801 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-21-02-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.520305 4801 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.541616 4801 manager.go:217] Machine: {Timestamp:2025-11-24 21:07:08.538264443 +0000 UTC m=+0.620851153 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:19e68446-b369-4df2-90ee-d6f4eb03379d BootID:ca2cf578-585b-4133-99fc-dae8e3b13777 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:88:84 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:88:84 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0f:5d:68 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1d:7f:2c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:18:e7:d7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b3:75:bc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:73:c0:29:04:9b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:d6:45:64:02:30 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.541915 4801 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.542142 4801 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.542659 4801 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.542907 4801 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.542951 4801 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.545471 4801 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.545510 4801 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.546073 4801 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.546115 4801 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.547246 4801 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.547378 4801 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.554207 4801 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.554242 4801 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.554271 4801 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.554288 4801 kubelet.go:324] "Adding apiserver pod source"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.554306 4801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.560296 4801 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.561863 4801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.563207 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.563344 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.563217 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.563464 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.564567 4801 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566522 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566565 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566580 
4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566594 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566617 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566630 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566644 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566664 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566681 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566695 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.566712 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.567427 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.569095 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.569793 4801 server.go:1280] "Started kubelet" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.571046 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.571411 4801 server.go:163] "Starting 
to listen" address="0.0.0.0" port=10250 Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.571434 4801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 21:07:08 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.572471 4801 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.573885 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.573956 4801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.574158 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:08:15.510809433 +0000 UTC Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.574217 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 309h1m6.936596725s for next certificate rotation Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.574561 4801 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.574570 4801 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.574601 4801 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.574588 4801 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.575471 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.575699 4801 server.go:460] "Adding debug handlers to kubelet server" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.575731 4801 factory.go:55] Registering systemd factory Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.575759 4801 factory.go:221] Registration of the systemd container factory successfully Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.575705 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.585800 4801 factory.go:153] Registering CRI-O factory Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.585854 4801 factory.go:221] Registration of the crio container factory successfully Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.585855 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.83:6443: connect: connection refused" interval="200ms" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.585955 4801 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.585997 4801 factory.go:103] Registering Raw factory Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.586158 4801 
manager.go:1196] Started watching for new ooms in manager Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.588234 4801 manager.go:319] Starting recovery of all containers Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.587540 4801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.83:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b0d6a2424aae5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 21:07:08.569758437 +0000 UTC m=+0.652345137,LastTimestamp:2025-11-24 21:07:08.569758437 +0000 UTC m=+0.652345137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.599938 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600052 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600071 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 
21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600088 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600101 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600116 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600128 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600139 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600155 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600168 4801 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600185 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600198 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600221 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600238 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600253 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600268 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600281 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600295 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600307 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600320 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600332 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600345 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600357 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600391 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600404 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600417 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600434 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600448 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600461 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600473 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600494 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600507 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600520 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600532 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600546 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600560 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600572 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600585 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600600 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600614 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600627 4801 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600639 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600651 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600666 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600681 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600694 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600710 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600724 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600737 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600751 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600763 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600775 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600792 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600805 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600819 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600833 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600847 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600859 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600871 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600885 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600897 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600910 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600923 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600938 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600951 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600964 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600978 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.600993 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601011 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601029 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601042 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601056 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601070 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601084 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601096 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601110 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601127 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601140 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601153 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601167 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601179 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601192 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601203 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601215 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601229 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601241 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601255 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601268 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601280 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601294 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601306 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601319 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601331 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601343 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601354 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601389 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601400 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601412 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601424 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601468 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601480 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601492 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601503 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601515 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601530 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601543 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601558 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601571 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601589 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601603 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601618 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601631 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601644 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601658 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601670 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601683 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601695 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601707 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601719 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601755 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601766 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601778 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601791 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601802 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601839 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601853 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601867 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601880 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601894 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601913 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601932 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601951 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601964 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601979 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.601997 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602016 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602032 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602045 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602060 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602074 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602087 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602100 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602115 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602128 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602140 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602153 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602165 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602178 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602191 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602204 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602215 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602227 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602239 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602251 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602262 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602276 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602290 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602304 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602318 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602333 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602552 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602594 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602629 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602661 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602690 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602720 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602748 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602776 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602809 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602838 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602866 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602893 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602921 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.602951 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.604965 4801 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605036 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605073 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605095 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605116 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605135 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605168 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605188 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605229 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605257 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605283 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605308 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605332 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605358 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605427 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605492 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605522 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605591 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605630 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605661 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605689 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605720 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605750 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605778 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605803 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" 
seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605826 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605851 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605875 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605900 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605923 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605948 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605973 4801 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.605998 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.606023 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.606052 4801 reconstruct.go:97] "Volume reconstruction finished" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.606070 4801 reconciler.go:26] "Reconciler: start to sync state" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.618448 4801 manager.go:324] Recovery completed Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.639527 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.642264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.642347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.642441 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.644525 4801 
cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.644558 4801 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.644589 4801 state_mem.go:36] "Initialized new in-memory state store" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.660148 4801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.662507 4801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.662564 4801 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.662609 4801 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.662684 4801 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 21:07:08 crc kubenswrapper[4801]: W1124 21:07:08.665850 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.666038 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.672699 4801 policy_none.go:49] "None policy: Start" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.673768 4801 
memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.673797 4801 state_mem.go:35] "Initializing new in-memory state store" Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.675153 4801 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.739778 4801 manager.go:334] "Starting Device Plugin manager" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.739853 4801 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.739873 4801 server.go:79] "Starting device plugin registration server" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.740507 4801 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.740535 4801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.740806 4801 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.740898 4801 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.740908 4801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.756320 4801 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.763604 4801 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.763676 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.764730 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.764766 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.764780 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.764905 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.765164 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.765232 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.765904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.765976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766339 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766492 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766542 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766489 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.766678 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.767916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.767959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.767977 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769473 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769667 
4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.769730 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770277 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770473 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770609 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.770654 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771705 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771900 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.771917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.772315 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.772360 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.773580 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.773615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.773628 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.786786 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.83:6443: connect: connection refused" interval="400ms" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809494 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809655 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809683 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809714 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809740 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809766 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809790 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.809920 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810019 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810087 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810132 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810165 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810214 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.810246 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.843056 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.845204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.845260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.845279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.845320 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:08 crc kubenswrapper[4801]: E1124 21:07:08.846319 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.83:6443: connect: connection refused" node="crc" Nov 24 
21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912098 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912157 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912196 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912261 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912292 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912323 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912326 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912354 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912847 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913020 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913065 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913079 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913107 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912973 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913095 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.912334 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913165 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913214 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913263 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913283 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913335 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.913637 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.914402 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:08 crc kubenswrapper[4801]: I1124 21:07:08.914518 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.046599 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.048604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.048675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.048695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.048743 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.049688 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.83:6443: connect: connection refused" node="crc" Nov 24 21:07:09 
crc kubenswrapper[4801]: I1124 21:07:09.101760 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.124278 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.155064 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.182116 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.187599 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-041119b18470ce750345aeabba99eaf89cd18b88ccf6f0054daaa73b56f3d77e WatchSource:0}: Error finding container 041119b18470ce750345aeabba99eaf89cd18b88ccf6f0054daaa73b56f3d77e: Status 404 returned error can't find the container with id 041119b18470ce750345aeabba99eaf89cd18b88ccf6f0054daaa73b56f3d77e Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.188703 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.83:6443: connect: connection refused" interval="800ms" Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.190227 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7123150c0a882f114e9b44dd0771c0d1e35be894d2571fcb0b50e329f8a550b1 WatchSource:0}: Error finding container 
7123150c0a882f114e9b44dd0771c0d1e35be894d2571fcb0b50e329f8a550b1: Status 404 returned error can't find the container with id 7123150c0a882f114e9b44dd0771c0d1e35be894d2571fcb0b50e329f8a550b1 Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.190451 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.201996 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1881e7d0b5ff93a5070077ad3a9febb142ece3b844745cdc1af91106074995c5 WatchSource:0}: Error finding container 1881e7d0b5ff93a5070077ad3a9febb142ece3b844745cdc1af91106074995c5: Status 404 returned error can't find the container with id 1881e7d0b5ff93a5070077ad3a9febb142ece3b844745cdc1af91106074995c5 Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.204055 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-650c82ec80f2f610e8ee2fadab72819259799be8efc2344c45f588270980f921 WatchSource:0}: Error finding container 650c82ec80f2f610e8ee2fadab72819259799be8efc2344c45f588270980f921: Status 404 returned error can't find the container with id 650c82ec80f2f610e8ee2fadab72819259799be8efc2344c45f588270980f921 Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.210876 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6d304b34ed35dc6dee794d243ff0c7dbb10fbcfffab1fbee417db4d12d0ea39a WatchSource:0}: Error finding container 6d304b34ed35dc6dee794d243ff0c7dbb10fbcfffab1fbee417db4d12d0ea39a: Status 404 returned error can't find the container with id 6d304b34ed35dc6dee794d243ff0c7dbb10fbcfffab1fbee417db4d12d0ea39a Nov 
24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.385999 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.386167 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.450018 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.451189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.451225 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.451236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.451262 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.451897 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.83:6443: connect: connection refused" node="crc" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.572957 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.602056 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.602225 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.667778 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6d304b34ed35dc6dee794d243ff0c7dbb10fbcfffab1fbee417db4d12d0ea39a"} Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.669254 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"650c82ec80f2f610e8ee2fadab72819259799be8efc2344c45f588270980f921"} Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.670691 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1881e7d0b5ff93a5070077ad3a9febb142ece3b844745cdc1af91106074995c5"} Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.672004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"041119b18470ce750345aeabba99eaf89cd18b88ccf6f0054daaa73b56f3d77e"} Nov 24 21:07:09 crc kubenswrapper[4801]: I1124 21:07:09.674891 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7123150c0a882f114e9b44dd0771c0d1e35be894d2571fcb0b50e329f8a550b1"} Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.696846 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.696951 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:09 crc kubenswrapper[4801]: W1124 21:07:09.975249 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 21:07:09.975852 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:09 crc kubenswrapper[4801]: E1124 
21:07:09.989629 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.83:6443: connect: connection refused" interval="1.6s" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.252597 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.254488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.254528 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.254540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.254567 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:10 crc kubenswrapper[4801]: E1124 21:07:10.254972 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.83:6443: connect: connection refused" node="crc" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.572424 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.647300 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 21:07:10 crc kubenswrapper[4801]: E1124 21:07:10.649094 4801 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: 
Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.680514 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc" exitCode=0 Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.680690 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.680648 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.682188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.682256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.682296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686127 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686775 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686819 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686835 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.686878 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.687720 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.687770 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.687789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.688396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.688556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.688760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.691617 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea" exitCode=0 Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.691736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.692154 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.693972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.694021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.694042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.694916 4801 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="23e455e08f1a5c84dacae789a715a4f08bcc672c206b3c046f7d29ff1c533fa8" exitCode=0 Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.695026 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"23e455e08f1a5c84dacae789a715a4f08bcc672c206b3c046f7d29ff1c533fa8"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.695057 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.696941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.697010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.697042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.698159 4801 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e" exitCode=0 Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.698190 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e"} Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.698279 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.699590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:10 crc kubenswrapper[4801]: I1124 21:07:10.699637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:10 crc 
kubenswrapper[4801]: I1124 21:07:10.699656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: W1124 21:07:11.112967 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:11 crc kubenswrapper[4801]: E1124 21:07:11.113084 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.572916 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:11 crc kubenswrapper[4801]: E1124 21:07:11.591153 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.83:6443: connect: connection refused" interval="3.2s" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.705386 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.705438 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.705447 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.705457 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.707694 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d" exitCode=0 Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.707780 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.707910 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.708826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.708883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.708895 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.711245 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.711927 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3ae37346c8bae8b96761493a414a2cbf06a1a68d95adee9c7580a4866fe34c5c"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.712572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.712587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.712604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.715208 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.715751 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.715776 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.715787 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1"} Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.716862 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.716915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.716925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.717039 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.718306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.718330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.718341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.736031 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.858842 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.859933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 
21:07:11.859967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.859976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:11 crc kubenswrapper[4801]: I1124 21:07:11.860000 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:11 crc kubenswrapper[4801]: E1124 21:07:11.860221 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.83:6443: connect: connection refused" node="crc" Nov 24 21:07:12 crc kubenswrapper[4801]: W1124 21:07:12.197247 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.83:6443: connect: connection refused Nov 24 21:07:12 crc kubenswrapper[4801]: E1124 21:07:12.197401 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.83:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.724062 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.724156 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e"} Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.726185 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.726236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.726255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729101 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a" exitCode=0 Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729232 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729242 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729308 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a"} Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729422 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.729307 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730297 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730336 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730354 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.730515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731161 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.731504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:12 crc kubenswrapper[4801]: I1124 21:07:12.771581 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738116 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122"} Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738256 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb"} Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738278 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79"} Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738197 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.738498 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.740067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.740073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.740140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 
21:07:13.740168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.740173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.740196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:13 crc kubenswrapper[4801]: I1124 21:07:13.781700 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.054705 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.747127 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879"} Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.747194 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.747203 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e"} Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.747227 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748733 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748751 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.748842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.822269 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.822613 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.824357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.824453 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.824472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:14 crc kubenswrapper[4801]: I1124 21:07:14.953040 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.060898 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 
21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.062870 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.062947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.062966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.063009 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.509312 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.510020 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.511733 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.511794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.511813 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.517290 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.749707 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.749917 4801 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.749972 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.751212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.751262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.751283 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752096 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.752428 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.772348 4801 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:07:15 crc kubenswrapper[4801]: I1124 21:07:15.772479 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:07:16 crc kubenswrapper[4801]: I1124 21:07:16.067196 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:16 crc kubenswrapper[4801]: I1124 21:07:16.752333 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:16 crc kubenswrapper[4801]: I1124 21:07:16.753933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:16 crc kubenswrapper[4801]: I1124 21:07:16.754011 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:16 crc kubenswrapper[4801]: I1124 21:07:16.754035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:17 crc kubenswrapper[4801]: I1124 21:07:17.554072 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:17 crc kubenswrapper[4801]: I1124 21:07:17.554525 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:17 crc kubenswrapper[4801]: I1124 21:07:17.556774 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:07:17 crc kubenswrapper[4801]: I1124 21:07:17.556837 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:17 crc kubenswrapper[4801]: I1124 21:07:17.556858 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.620347 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.620709 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.626853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.626966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.627027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:18 crc kubenswrapper[4801]: E1124 21:07:18.756551 4801 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.905564 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.906264 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.908171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.908252 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:18 crc kubenswrapper[4801]: I1124 21:07:18.908282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:21 crc kubenswrapper[4801]: I1124 21:07:21.743645 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:21 crc kubenswrapper[4801]: I1124 21:07:21.743879 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:21 crc kubenswrapper[4801]: I1124 21:07:21.745889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:21 crc kubenswrapper[4801]: I1124 21:07:21.745964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:21 crc kubenswrapper[4801]: I1124 21:07:21.745983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:22 crc kubenswrapper[4801]: W1124 21:07:22.433596 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.433727 4801 trace.go:236] Trace[899955124]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:07:12.431) (total time: 10001ms): Nov 24 21:07:22 crc kubenswrapper[4801]: Trace[899955124]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:07:22.433) Nov 24 21:07:22 crc kubenswrapper[4801]: 
Trace[899955124]: [10.001995589s] [10.001995589s] END Nov 24 21:07:22 crc kubenswrapper[4801]: E1124 21:07:22.433771 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.572851 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:07:22 crc kubenswrapper[4801]: W1124 21:07:22.689770 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.690259 4801 trace.go:236] Trace[1584223544]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:07:12.688) (total time: 10001ms): Nov 24 21:07:22 crc kubenswrapper[4801]: Trace[1584223544]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:07:22.689) Nov 24 21:07:22 crc kubenswrapper[4801]: Trace[1584223544]: [10.001612876s] [10.001612876s] END Nov 24 21:07:22 crc kubenswrapper[4801]: E1124 21:07:22.690292 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 21:07:22 crc 
kubenswrapper[4801]: I1124 21:07:22.773130 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.776499 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e" exitCode=255 Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.776574 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e"} Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.776791 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.777840 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.777870 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.777884 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:22 crc kubenswrapper[4801]: I1124 21:07:22.778483 4801 scope.go:117] "RemoveContainer" containerID="82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.358231 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.358310 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.364251 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.364319 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.781760 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.785581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075"} Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.785793 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.787072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.787158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:23 crc kubenswrapper[4801]: I1124 21:07:23.787189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:24 crc kubenswrapper[4801]: I1124 21:07:24.055634 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:24 crc kubenswrapper[4801]: I1124 21:07:24.791770 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:24 crc kubenswrapper[4801]: I1124 21:07:24.793876 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:24 crc kubenswrapper[4801]: I1124 21:07:24.793950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:24 crc kubenswrapper[4801]: I1124 21:07:24.793970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:25 crc kubenswrapper[4801]: I1124 21:07:25.772438 4801 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:07:25 crc kubenswrapper[4801]: I1124 21:07:25.772894 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 21:07:26 crc kubenswrapper[4801]: I1124 21:07:26.188979 4801 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 21:07:26 crc kubenswrapper[4801]: I1124 21:07:26.373661 4801 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.564774 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.564981 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.566496 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.566545 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.566562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.572535 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.801586 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.803275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.803341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:27 crc kubenswrapper[4801]: I1124 21:07:27.803355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.352920 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.355692 4801 trace.go:236] Trace[326078751]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:07:18.114) (total time: 10240ms): Nov 24 21:07:28 crc kubenswrapper[4801]: Trace[326078751]: ---"Objects listed" error: 10240ms (21:07:28.355) Nov 24 21:07:28 crc kubenswrapper[4801]: Trace[326078751]: [10.240849374s] [10.240849374s] END Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.355760 4801 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.356180 4801 trace.go:236] Trace[134679900]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:07:16.567) (total time: 11788ms): Nov 24 21:07:28 crc kubenswrapper[4801]: Trace[134679900]: ---"Objects listed" error: 11788ms (21:07:28.355) Nov 24 21:07:28 crc kubenswrapper[4801]: Trace[134679900]: [11.788340217s] [11.788340217s] END Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.356256 4801 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.356881 4801 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 21:07:28 crc kubenswrapper[4801]: 
E1124 21:07:28.359609 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.370729 4801 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.565948 4801 apiserver.go:52] "Watching apiserver" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.569208 4801 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.569684 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.570241 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.570392 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.570358 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.571106 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.571282 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.571473 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.571505 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.571737 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.571826 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.574496 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.575240 4801 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.577432 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.577662 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.577663 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.577768 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.578117 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.578443 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.578509 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.580444 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 21:07:28 crc kubenswrapper[4801]: 
I1124 21:07:28.604325 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.619133 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.635695 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.647555 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.657443 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658339 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658401 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658423 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658439 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658457 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658478 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658497 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658521 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658542 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658559 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658579 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658597 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658647 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:07:28 crc 
kubenswrapper[4801]: I1124 21:07:28.658759 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658780 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658798 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658815 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658836 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658852 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658894 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658913 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658932 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658950 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.658968 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: 
I1124 21:07:28.658998 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659029 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659045 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659063 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659081 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659145 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659162 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659178 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659197 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 
21:07:28.659222 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659249 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659271 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659310 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659348 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659379 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659397 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659412 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659414 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659428 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659447 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659466 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659483 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659500 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659519 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659536 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659555 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659574 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659590 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659594 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: 
"496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659608 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659625 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659643 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659681 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659699 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659716 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659732 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659750 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659767 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659783 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659798 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659832 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659849 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659866 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659888 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659910 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659935 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659952 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659970 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659989 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660007 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660024 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660040 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660056 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660072 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660090 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660107 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660122 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660141 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660161 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660179 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660196 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660220 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660237 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660253 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660271 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660307 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660323 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660357 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660408 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660429 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660445 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660463 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660480 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660502 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660520 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" 
(UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660539 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660556 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660574 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662560 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662638 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 
21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662680 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662723 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662759 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662792 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662831 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662876 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662908 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662942 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662981 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663007 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663037 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 
21:07:28.663065 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663088 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663114 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663143 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663176 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663202 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663230 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663258 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663280 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663312 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663384 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" 
(UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663412 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663441 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663466 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663490 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663559 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663589 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663612 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663631 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663653 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663678 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663710 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663734 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663758 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663787 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663812 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:07:28 crc 
kubenswrapper[4801]: I1124 21:07:28.663837 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663862 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663886 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663910 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663941 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663965 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663987 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664013 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664039 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664060 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664082 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664107 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664138 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664161 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664186 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664213 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664233 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664261 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664286 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664311 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664357 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664397 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664420 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664444 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664471 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664492 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.669681 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659878 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.659981 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.660348 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662036 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662256 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662425 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.662476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663130 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663306 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663671 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663895 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.663960 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664113 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664269 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664433 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.664530 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:07:29.164492713 +0000 UTC m=+21.247079383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.677694 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664685 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664798 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.664851 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.665103 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.665298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.665478 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.665971 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.666332 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.666719 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667012 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667667 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667682 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667811 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.668827 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.669233 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.669801 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.669900 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670067 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670255 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670289 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670552 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670578 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670647 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.670967 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.671289 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.671498 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.671679 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.671945 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.672210 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.672710 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.673404 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.673745 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.673974 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.674349 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.674502 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.667505 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.674990 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.675206 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.675661 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676012 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676395 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676736 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676915 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676977 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.676976 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.677288 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.677443 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.680017 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.680152 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.683812 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.677833 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.677736 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.686498 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.686529 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.686766 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687008 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687096 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687437 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687473 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687600 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687675 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.687876 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.688202 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.688409 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.688454 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.688778 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.688990 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689139 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689181 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689222 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689614 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689903 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689952 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.689987 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690323 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690620 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690712 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690784 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690894 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.690937 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691215 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691386 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691402 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691683 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691721 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.691862 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692173 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692193 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692253 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692144 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695663 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692631 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692671 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.692790 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693182 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693221 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693358 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693622 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693812 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.693864 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694189 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694277 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694536 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694548 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.694802 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695031 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695214 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695480 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.696174 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.695780 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.696470 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.696568 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.696764 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.697028 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.697164 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.697432 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.697879 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.698089 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.698343 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.698773 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.699295 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.699739 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.699834 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.699945 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.700144 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.700500 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.701112 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.701542 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.701673 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.701705 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.701943 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.702063 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.702286 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.702337 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.702596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.702885 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703109 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703255 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703354 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703286 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703300 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703448 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703531 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703556 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703576 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703863 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.703999 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704265 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704298 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704344 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704433 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704453 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704474 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704491 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704506 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704524 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704543 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704563 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704624 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704698 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704732 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704755 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704780 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704803 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704822 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 
21:07:28.704841 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704865 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704905 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704926 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704949 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.704965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.705070 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.705989 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706002 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706013 4801 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706023 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706036 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706048 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706060 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706073 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706083 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706093 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node 
\"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706103 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706114 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706125 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706136 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706145 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706155 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706165 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: 
I1124 21:07:28.706174 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706185 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706195 4801 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706210 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706219 4801 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706230 4801 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706240 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706250 4801 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706258 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706267 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706277 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706287 4801 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706296 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706306 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706338 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706348 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706357 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706381 4801 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706393 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706402 4801 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706414 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706424 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" 
Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706433 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706442 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706452 4801 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706462 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706473 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706483 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706494 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706503 4801 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706515 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706524 4801 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706537 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706547 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706558 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706567 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706577 4801 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706586 4801 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706595 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706604 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706614 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706622 4801 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706631 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706655 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706664 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706672 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706681 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706690 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706699 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706707 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706716 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706725 4801 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706735 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706745 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706753 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706762 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706772 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706781 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" 
DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706791 4801 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706801 4801 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706810 4801 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706820 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706830 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706840 4801 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706850 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706860 4801 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706870 4801 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706880 4801 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706892 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706902 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706911 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706939 4801 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706950 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706960 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706970 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706980 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706994 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707003 4801 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707013 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707023 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 
21:07:28.707032 4801 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707046 4801 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707059 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707070 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707081 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707092 4801 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707104 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707117 4801 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707130 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707138 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707146 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707156 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707164 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707173 4801 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707182 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc 
kubenswrapper[4801]: I1124 21:07:28.707191 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707200 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707209 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707218 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707226 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707237 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707245 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707254 4801 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707262 4801 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707270 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707279 4801 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707288 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707297 4801 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707306 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707315 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707326 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707337 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707345 4801 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707354 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707378 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707388 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707400 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707413 4801 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707424 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707438 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707451 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707459 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707471 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707482 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707490 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707500 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707509 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707518 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707528 4801 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707537 4801 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707545 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath 
\"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707554 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707563 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707572 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707581 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707590 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707599 4801 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707608 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707617 4801 reconciler_common.go:293] "Volume 
detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707626 4801 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707637 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707647 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707656 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.707665 4801 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.705528 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.705703 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.705963 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706198 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.706829 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.708136 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.711127 4801 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.711586 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.711882 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.711965 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:29.211940211 +0000 UTC m=+21.294526881 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.712059 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.712053 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.712089 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:29.212083005 +0000 UTC m=+21.294669675 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.713806 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.713909 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.714089 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.714858 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.717437 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.719486 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.719650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.719683 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.719839 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.719832 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.721251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.721592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.722795 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.724799 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.726389 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.726552 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.726599 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.730188 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.731482 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.732623 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.733312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.733782 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.737643 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.737760 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.738822 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.739427 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.742502 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.744659 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.744701 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.744719 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.744803 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:29.244775648 +0000 UTC m=+21.327362518 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.744804 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.747985 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.749646 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.750771 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.750797 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.750812 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:28 crc kubenswrapper[4801]: E1124 21:07:28.750852 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:29.250840187 +0000 UTC m=+21.333426877 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.751744 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.753007 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.755126 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.767093 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.767749 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.769129 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.773649 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.776812 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.778699 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.778749 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.779501 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.782926 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.783522 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.784114 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.791415 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.800131 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.800866 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.802713 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.804023 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.804466 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.805053 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808779 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808836 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808906 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808919 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808929 4801 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808940 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808949 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 
crc kubenswrapper[4801]: I1124 21:07:28.808960 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808970 4801 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808979 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808989 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.808998 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809008 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809019 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: 
I1124 21:07:28.809029 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809041 4801 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809052 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809061 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809070 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809079 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809089 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809098 4801 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809107 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809116 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809124 4801 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809133 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809143 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809158 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809168 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809177 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809186 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809195 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809247 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.809301 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.819423 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.819945 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.820996 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.821829 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.823416 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.824138 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.824864 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.825968 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.827769 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.828330 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.828586 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.829324 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.829803 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.830264 4801 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.830384 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.833341 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.834591 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.836013 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.837714 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.838413 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.839308 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.839940 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.840983 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.841455 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.842081 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.843052 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.843983 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.844562 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.845463 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.845960 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.847163 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.847872 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.848819 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.849271 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.849784 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.849766 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.850770 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.851226 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.852121 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.852153 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.852530 4801 csr.go:261] certificate signing request csr-46zwq is approved, waiting to be issued Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.881779 4801 csr.go:257] certificate signing request csr-46zwq is issued Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.884184 
4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.889702 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.895500 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.900847 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.904562 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.910213 4801 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.910248 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.922675 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.940961 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.958488 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:28 crc kubenswrapper[4801]: I1124 21:07:28.991163 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.012622 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.026897 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.036415 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.049290 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.059640 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.073611 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.118216 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.133925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.148345 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.160101 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.212394 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.212502 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.212527 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.212606 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.212662 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:07:30.212622354 +0000 UTC m=+22.295209024 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.212706 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:30.212696026 +0000 UTC m=+22.295282696 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.212777 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.212865 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:30.212844411 +0000 UTC m=+22.295431081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.267849 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w5rck"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.268292 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.270700 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.270903 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.271545 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.306392 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.313249 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrh8\" (UniqueName: \"kubernetes.io/projected/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-kube-api-access-7lrh8\") pod \"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.313306 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-hosts-file\") pod 
\"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.313338 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.313382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313548 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313570 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313584 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313634 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:30.31361603 +0000 UTC m=+22.396202700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313549 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313716 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313734 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.313797 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:30.313770534 +0000 UTC m=+22.396357224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.319215 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.329037 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.343549 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.354203 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.368984 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.384245 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.395521 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.413887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrh8\" (UniqueName: \"kubernetes.io/projected/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-kube-api-access-7lrh8\") pod \"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.413930 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-hosts-file\") pod \"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.414023 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-hosts-file\") pod \"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.514878 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrh8\" (UniqueName: \"kubernetes.io/projected/7e27b0cc-3de6-48f9-9b49-89c0ba6264df-kube-api-access-7lrh8\") pod \"node-resolver-w5rck\" (UID: \"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\") " pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.585733 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w5rck" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.599865 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e27b0cc_3de6_48f9_9b49_89c0ba6264df.slice/crio-7ff1a552b0cbea14276adf8b8c5ce9d166ecdad31774e47fc8f3cfdbbf39c8db WatchSource:0}: Error finding container 7ff1a552b0cbea14276adf8b8c5ce9d166ecdad31774e47fc8f3cfdbbf39c8db: Status 404 returned error can't find the container with id 7ff1a552b0cbea14276adf8b8c5ce9d166ecdad31774e47fc8f3cfdbbf39c8db Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.663064 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.663264 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.682575 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7c69f"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.683406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.683975 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gdjvp"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.684432 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.686687 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mnfsp"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.687119 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.691210 4801 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.691287 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.691412 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.695139 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.695189 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.695751 4801 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object 
Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.695797 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.695895 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.695935 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.698214 4801 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.698265 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.698333 4801 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group 
"" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.698348 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.698443 4801 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.698458 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.698531 4801 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.698546 4801 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: W1124 21:07:29.698587 4801 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.698599 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.707854 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrqff"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.708788 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.715975 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-system-cni-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716010 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-kubelet\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716028 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-hostroot\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716049 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716070 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-etc-kubernetes\") pod \"multus-gdjvp\" (UID: 
\"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-os-release\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716121 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-conf-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716140 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-bin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716156 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-cni-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716174 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce526e40-8920-4d1a-adfe-a7149eed9a11-rootfs\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716192 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxw6\" (UniqueName: \"kubernetes.io/projected/7ad310fd-a52d-4270-9403-4b40769c580e-kube-api-access-vrxw6\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716293 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7xr\" (UniqueName: \"kubernetes.io/projected/ce526e40-8920-4d1a-adfe-a7149eed9a11-kube-api-access-8t7xr\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716387 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce526e40-8920-4d1a-adfe-a7149eed9a11-proxy-tls\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716455 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphqx\" (UniqueName: \"kubernetes.io/projected/5f348c59-5453-436a-bcce-548bdef22a27-kube-api-access-wphqx\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716492 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-netns\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716520 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-multus-certs\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716541 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-multus\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716566 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-cnibin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716597 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-system-cni-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716621 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-os-release\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716647 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716670 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-cnibin\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716693 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716723 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716742 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-k8s-cni-cncf-io\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce526e40-8920-4d1a-adfe-a7149eed9a11-mcd-auth-proxy-config\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716790 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.716815 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-socket-dir-parent\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.733601 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.733705 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.733906 4801 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.734080 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.734096 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.734164 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.734325 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.789683 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.810656 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w5rck" event={"ID":"7e27b0cc-3de6-48f9-9b49-89c0ba6264df","Type":"ContainerStarted","Data":"7ff1a552b0cbea14276adf8b8c5ce9d166ecdad31774e47fc8f3cfdbbf39c8db"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.813652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.813732 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"17bce86f431c52627b90baad976e3f6dc8522c96d02f7495b239f6a90bf5e5b8"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.816737 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817239 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817404 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7xr\" (UniqueName: \"kubernetes.io/projected/ce526e40-8920-4d1a-adfe-a7149eed9a11-kube-api-access-8t7xr\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817445 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce526e40-8920-4d1a-adfe-a7149eed9a11-proxy-tls\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817477 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817521 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817593 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817626 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphqx\" (UniqueName: \"kubernetes.io/projected/5f348c59-5453-436a-bcce-548bdef22a27-kube-api-access-wphqx\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817649 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-netns\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-multus-certs\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817693 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817716 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817744 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-cnibin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817762 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-multus\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817781 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817797 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817817 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-os-release\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817853 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-system-cni-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817891 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-cnibin\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.817977 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818003 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818030 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-socket-dir-parent\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818080 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-k8s-cni-cncf-io\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818098 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce526e40-8920-4d1a-adfe-a7149eed9a11-mcd-auth-proxy-config\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818119 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818144 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-kubelet\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818165 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-hostroot\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818182 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-system-cni-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818221 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log\") pod 
\"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818244 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818262 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-os-release\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-etc-kubernetes\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818295 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818316 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units\") pod \"ovnkube-node-jrqff\" (UID: 
\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818338 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxzz\" (UniqueName: \"kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818355 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-conf-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818392 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-bin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818411 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxw6\" (UniqueName: \"kubernetes.io/projected/7ad310fd-a52d-4270-9403-4b40769c580e-kube-api-access-vrxw6\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818427 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-cni-dir\") pod \"multus-gdjvp\" (UID: 
\"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818450 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce526e40-8920-4d1a-adfe-a7149eed9a11-rootfs\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.818574 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce526e40-8920-4d1a-adfe-a7149eed9a11-rootfs\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-cnibin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-multus\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819353 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-os-release\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc 
kubenswrapper[4801]: I1124 21:07:29.819355 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-netns\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819440 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-multus-certs\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819467 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-cnibin\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819473 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-system-cni-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819511 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-system-cni-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-conf-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819825 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-cni-dir\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819862 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-etc-kubernetes\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819918 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-os-release\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819942 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-cni-bin\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819974 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-multus-socket-dir-parent\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.819995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-var-lib-kubelet\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.820019 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-host-run-k8s-cni-cncf-io\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.820038 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5f348c59-5453-436a-bcce-548bdef22a27-hostroot\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.820233 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ad310fd-a52d-4270-9403-4b40769c580e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.820236 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ce526e40-8920-4d1a-adfe-a7149eed9a11-mcd-auth-proxy-config\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.830046 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075" exitCode=255 Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.830171 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.830236 4801 scope.go:117] "RemoveContainer" containerID="82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.830876 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce526e40-8920-4d1a-adfe-a7149eed9a11-proxy-tls\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.840146 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.840907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f33264717d8afc8515f2e25ed2895c520010f7fd8e35d2997c51ec730e21cc27"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.849678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7xr\" (UniqueName: \"kubernetes.io/projected/ce526e40-8920-4d1a-adfe-a7149eed9a11-kube-api-access-8t7xr\") pod \"machine-config-daemon-mnfsp\" (UID: \"ce526e40-8920-4d1a-adfe-a7149eed9a11\") " pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.850394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.850448 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.850460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c1ab4711f84ebc678eb95c1f45fe606ad8725676c697a5b8595f25fdd8c9c07"} Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.855172 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.855431 4801 scope.go:117] "RemoveContainer" containerID="06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075" Nov 24 21:07:29 crc kubenswrapper[4801]: E1124 21:07:29.855669 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.858343 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.875734 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.883030 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-24 21:02:28 +0000 UTC, rotation deadline is 2026-10-12 09:33:44.864487586 +0000 UTC Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.883091 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7716h26m14.981398506s for next certificate rotation Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.885178 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.899304 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.916139 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919454 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919492 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919545 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919601 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919612 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919680 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919722 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919794 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919847 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919881 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log\") pod \"ovnkube-node-jrqff\" (UID: 
\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919938 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919956 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxzz\" (UniqueName: \"kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.919974 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920086 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" 
Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920172 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920209 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920272 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920293 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920316 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc 
kubenswrapper[4801]: I1124 21:07:29.920352 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920349 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920524 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920600 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920411 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920628 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920658 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920776 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920783 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.920842 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.921461 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.921579 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.921682 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.922902 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.931268 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.940084 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxzz\" (UniqueName: \"kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz\") pod \"ovnkube-node-jrqff\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.946864 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.958968 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.971019 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:29 crc kubenswrapper[4801]: I1124 21:07:29.983606 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.013507 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb
4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.028749 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.042526 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.064852 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.078011 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.087535 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: W1124 21:07:30.092737 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6757adc4_e0f2_49a6_8320_29cb96e4a10f.slice/crio-0ebcecc12d6be61054df541cbff7e74a2293b0beb8007195dd7ffab9eb10d4c5 WatchSource:0}: Error finding container 0ebcecc12d6be61054df541cbff7e74a2293b0beb8007195dd7ffab9eb10d4c5: Status 404 returned error can't find the container with id 0ebcecc12d6be61054df541cbff7e74a2293b0beb8007195dd7ffab9eb10d4c5 Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.126136 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:22Z\\\",\\\"message\\\":\\\"W1124 21:07:11.969462 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:07:11.969845 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764018431 cert, and key in /tmp/serving-cert-1360001925/serving-signer.crt, /tmp/serving-cert-1360001925/serving-signer.key\\\\nI1124 21:07:12.214699 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:07:12.222580 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:07:12.222809 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:07:12.225120 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1360001925/tls.crt::/tmp/serving-cert-1360001925/tls.key\\\\\\\"\\\\nF1124 21:07:22.651823 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.144841 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.163765 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.180474 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.223616 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.223849 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:07:32.223803789 +0000 UTC m=+24.306390459 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.224140 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.224206 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.224338 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.224432 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:32.224413408 +0000 UTC m=+24.307000088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.224638 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.224813 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:32.224799799 +0000 UTC m=+24.307386649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.325269 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.325344 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325500 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325539 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325551 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325555 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325584 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325595 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325609 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:32.325592959 +0000 UTC m=+24.408179629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.325640 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:07:32.32562631 +0000 UTC m=+24.408212980 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.561067 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.568504 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:07:30 crc kubenswrapper[4801]: W1124 21:07:30.580217 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce526e40_8920_4d1a_adfe_a7149eed9a11.slice/crio-f89bd8344c67de1a6d10f339fef0f5a9595e74a247be4a25a51acb0d73540484 WatchSource:0}: Error finding container f89bd8344c67de1a6d10f339fef0f5a9595e74a247be4a25a51acb0d73540484: Status 404 returned error can't find the container with id f89bd8344c67de1a6d10f339fef0f5a9595e74a247be4a25a51acb0d73540484 Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.663678 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.663877 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.664632 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.664811 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.667960 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.669071 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.816504 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819669 4801 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819700 4801 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819719 4801 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819743 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config podName:5f348c59-5453-436a-bcce-548bdef22a27 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:31.319721038 +0000 UTC m=+23.402307708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config") pod "multus-gdjvp" (UID: "5f348c59-5453-436a-bcce-548bdef22a27") : failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819841 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist podName:7ad310fd-a52d-4270-9403-4b40769c580e nodeName:}" failed. No retries permitted until 2025-11-24 21:07:31.319812811 +0000 UTC m=+23.402399471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-7c69f" (UID: "7ad310fd-a52d-4270-9403-4b40769c580e") : failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819855 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy podName:7ad310fd-a52d-4270-9403-4b40769c580e nodeName:}" failed. No retries permitted until 2025-11-24 21:07:31.319848612 +0000 UTC m=+23.402435282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy") pod "multus-additional-cni-plugins-7c69f" (UID: "7ad310fd-a52d-4270-9403-4b40769c580e") : failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.819927 4801 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.820062 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy podName:5f348c59-5453-436a-bcce-548bdef22a27 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:31.320030058 +0000 UTC m=+23.402616768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy") pod "multus-gdjvp" (UID: "5f348c59-5453-436a-bcce-548bdef22a27") : failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.841205 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.850279 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.854473 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027" exitCode=0 Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.854554 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" 
event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027"} Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.854587 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"0ebcecc12d6be61054df541cbff7e74a2293b0beb8007195dd7ffab9eb10d4c5"} Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.859734 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.867129 4801 scope.go:117] "RemoveContainer" containerID="06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075" Nov 24 21:07:30 crc kubenswrapper[4801]: E1124 21:07:30.867318 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.869404 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065"} Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.869442 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"f89bd8344c67de1a6d10f339fef0f5a9595e74a247be4a25a51acb0d73540484"} Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.871335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w5rck" event={"ID":"7e27b0cc-3de6-48f9-9b49-89c0ba6264df","Type":"ContainerStarted","Data":"f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568"} Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.879703 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.893845 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.921693 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.954286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:30 crc kubenswrapper[4801]: I1124 21:07:30.993835 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.012416 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.039515 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.074838 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.081397 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.084434 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.094954 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxw6\" (UniqueName: \"kubernetes.io/projected/7ad310fd-a52d-4270-9403-4b40769c580e-kube-api-access-vrxw6\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.098842 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wphqx\" (UniqueName: \"kubernetes.io/projected/5f348c59-5453-436a-bcce-548bdef22a27-kube-api-access-wphqx\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.104028 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.129168 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82eba3186c5efd67152330a971d0a2f3704e4cb74941df59e5db0ae94808250e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:22Z\\\",\\\"message\\\":\\\"W1124 21:07:11.969462 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:07:11.969845 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764018431 cert, and key in /tmp/serving-cert-1360001925/serving-signer.crt, /tmp/serving-cert-1360001925/serving-signer.key\\\\nI1124 21:07:12.214699 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:07:12.222580 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:07:12.222809 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:07:12.225120 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1360001925/tls.crt::/tmp/serving-cert-1360001925/tls.key\\\\\\\"\\\\nF1124 21:07:22.651823 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.144267 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.162092 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.190473 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.206845 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.228805 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.242034 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.257648 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.274823 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.288189 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.308261 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.326897 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.335055 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.335145 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.335179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.335208 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.336100 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-cni-binary-copy\") pod \"multus-gdjvp\" (UID: 
\"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.336159 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.336205 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ad310fd-a52d-4270-9403-4b40769c580e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c69f\" (UID: \"7ad310fd-a52d-4270-9403-4b40769c580e\") " pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.336285 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5f348c59-5453-436a-bcce-548bdef22a27-multus-daemon-config\") pod \"multus-gdjvp\" (UID: \"5f348c59-5453-436a-bcce-548bdef22a27\") " pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.346925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.372963 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.392833 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.412794 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.425729 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.436679 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.501129 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7c69f" Nov 24 21:07:31 crc kubenswrapper[4801]: W1124 21:07:31.512854 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad310fd_a52d_4270_9403_4b40769c580e.slice/crio-c6766ac5315325e16d257533b984e1e5f804b71968c21d33a672ec4707f174f7 WatchSource:0}: Error finding container c6766ac5315325e16d257533b984e1e5f804b71968c21d33a672ec4707f174f7: Status 404 returned error can't find the container with id c6766ac5315325e16d257533b984e1e5f804b71968c21d33a672ec4707f174f7 Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.515404 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gdjvp" Nov 24 21:07:31 crc kubenswrapper[4801]: W1124 21:07:31.572382 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f348c59_5453_436a_bcce_548bdef22a27.slice/crio-05115941e2417d59e5922886a7420038501a8b98ba196892d4c59c213ada0fa6 WatchSource:0}: Error finding container 05115941e2417d59e5922886a7420038501a8b98ba196892d4c59c213ada0fa6: Status 404 returned error can't find the container with id 05115941e2417d59e5922886a7420038501a8b98ba196892d4c59c213ada0fa6 Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.664344 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:31 crc kubenswrapper[4801]: E1124 21:07:31.664728 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.881972 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerStarted","Data":"9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.882539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerStarted","Data":"c6766ac5315325e16d257533b984e1e5f804b71968c21d33a672ec4707f174f7"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.891000 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.895718 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902260 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" 
event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902295 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902308 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.902321 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.904674 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerStarted","Data":"31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.904721 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" 
event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerStarted","Data":"05115941e2417d59e5922886a7420038501a8b98ba196892d4c59c213ada0fa6"} Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.910572 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.926543 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.944636 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.959587 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.978500 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:31 crc kubenswrapper[4801]: I1124 21:07:31.998084 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.011761 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.023954 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.039892 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.061297 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.076316 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nqbj8"] Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.076984 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.078128 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: W1124 21:07:32.078735 4801 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.078784 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:32 crc kubenswrapper[4801]: W1124 21:07:32.079021 4801 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.079047 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.080226 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.080247 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.096399 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.109375 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.134198 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb
4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.143951 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f40abc6-b686-4090-b952-f36cbf4fb47f-serviceca\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.143992 
4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4ww\" (UniqueName: \"kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.144055 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f40abc6-b686-4090-b952-f36cbf4fb47f-host\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.149408 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.161742 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.176203 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.190851 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.205421 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.221251 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.234082 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245098 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245233 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f40abc6-b686-4090-b952-f36cbf4fb47f-host\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245270 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.245391 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:07:36.245325785 +0000 UTC m=+28.327912455 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245405 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f40abc6-b686-4090-b952-f36cbf4fb47f-host\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.245425 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.245538 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:36.245529041 +0000 UTC m=+28.328115711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245516 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f40abc6-b686-4090-b952-f36cbf4fb47f-serviceca\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245591 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l4ww\" (UniqueName: \"kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.245694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.245861 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.245955 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:36.245932083 +0000 UTC m=+28.328518753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.246447 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f40abc6-b686-4090-b952-f36cbf4fb47f-serviceca\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.254237 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.266495 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.280571 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.297749 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.315239 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.335562 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.346449 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.346511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346666 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346686 4801 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346697 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346721 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346766 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346784 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346749 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:36.346731593 +0000 UTC m=+28.429318263 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.346882 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:36.346860227 +0000 UTC m=+28.429446897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.663353 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.663673 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.664113 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.664260 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.777919 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.782206 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.792458 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.807514 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.824652 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.840654 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.858792 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.875167 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.888243 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.902515 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.911697 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446" exitCode=0 Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.911817 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" 
event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446"} Nov 24 21:07:32 crc kubenswrapper[4801]: E1124 21:07:32.923084 4801 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.925239 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.942158 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.968096 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.969773 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 21:07:32 crc kubenswrapper[4801]: I1124 21:07:32.987521 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.003323 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.015165 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.027772 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.042027 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.056641 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.067793 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.089736 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.103931 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.119981 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.138462 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.151088 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.165780 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.179280 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.199546 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.237566 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: E1124 21:07:33.260176 4801 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:33 crc kubenswrapper[4801]: E1124 21:07:33.260245 4801 projected.go:194] Error preparing data for projected volume kube-api-access-2l4ww for pod openshift-image-registry/node-ca-nqbj8: failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:33 crc kubenswrapper[4801]: E1124 21:07:33.260333 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww podName:2f40abc6-b686-4090-b952-f36cbf4fb47f nodeName:}" failed. No retries permitted until 2025-11-24 21:07:33.760306473 +0000 UTC m=+25.842893153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2l4ww" (UniqueName: "kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww") pod "node-ca-nqbj8" (UID: "2f40abc6-b686-4090-b952-f36cbf4fb47f") : failed to sync configmap cache: timed out waiting for the condition Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.287227 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.317175 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.360757 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.506943 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.663111 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:33 crc kubenswrapper[4801]: E1124 21:07:33.663313 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.763722 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l4ww\" (UniqueName: \"kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.772403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l4ww\" (UniqueName: \"kubernetes.io/projected/2f40abc6-b686-4090-b952-f36cbf4fb47f-kube-api-access-2l4ww\") pod \"node-ca-nqbj8\" (UID: \"2f40abc6-b686-4090-b952-f36cbf4fb47f\") " pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.921322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerStarted","Data":"8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc"} Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.929051 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nqbj8" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.941597 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.967994 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:33 crc kubenswrapper[4801]: I1124 21:07:33.986536 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.014057 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.029829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.042670 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.058686 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.081631 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.094551 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.112885 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.124604 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.138181 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.151755 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.165336 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.179644 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.419161 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.419793 4801 scope.go:117] "RemoveContainer" containerID="06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075" Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.419945 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.663785 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.664006 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.664038 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.664193 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.760556 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.763011 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.763077 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.763099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.763308 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.772289 4801 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.772843 4801 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.774310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.774355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.774389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.774409 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.774421 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.788880 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.794073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.794129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.794140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.794163 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.794190 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.812175 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.823904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.823964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.823983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.824013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.824034 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.839140 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.844848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.844904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.844923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.844987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.845021 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.865454 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.869387 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.869519 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.869597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.869672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.869742 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.891275 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: E1124 21:07:34.891700 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.894616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.894680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.894701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.894732 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc 
kubenswrapper[4801]: I1124 21:07:34.894751 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.931011 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894"} Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.933416 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqbj8" event={"ID":"2f40abc6-b686-4090-b952-f36cbf4fb47f","Type":"ContainerStarted","Data":"72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733"} Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.933836 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqbj8" event={"ID":"2f40abc6-b686-4090-b952-f36cbf4fb47f","Type":"ContainerStarted","Data":"65204110afe7dcd9b38994b361f36155efe26a09e1dddc91544143c9be089032"} Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.936037 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc" exitCode=0 Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.936159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" 
event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc"} Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.964131 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.980966 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:34Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.998040 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.998100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.998119 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:34 crc 
kubenswrapper[4801]: I1124 21:07:34.998147 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:34 crc kubenswrapper[4801]: I1124 21:07:34.998165 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:34Z","lastTransitionTime":"2025-11-24T21:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.019335 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.033879 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.051936 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.066447 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.096565 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.101614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.101751 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.101829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.101905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.102001 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.109723 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.125028 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.139117 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.152095 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.167404 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.180061 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.194029 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.205212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.205272 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.205286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.205309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.205322 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.208498 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.222174 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.255190 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.272179 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.293002 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.308562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.308640 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.308661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.308689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.308709 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.320937 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.351530 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.366909 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.386288 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.406595 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.411932 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.411994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.412017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.412048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.412069 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.426058 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.446735 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.465754 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.485926 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.501938 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.514672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.514722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.514735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.514757 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.514771 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.515331 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.617872 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.617933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.617952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.617979 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.617997 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.663833 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:35 crc kubenswrapper[4801]: E1124 21:07:35.664010 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.720535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.720593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.720612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.720635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.720655 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.823774 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.823917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.823941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.823967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.823988 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.928159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.928221 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.928238 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.928264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.928282 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:35Z","lastTransitionTime":"2025-11-24T21:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.943438 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065" exitCode=0 Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.943565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065"} Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.973411 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:35 crc kubenswrapper[4801]: I1124 21:07:35.995919 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.019760 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.031749 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.031802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.031814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.031836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.031847 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.034965 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.063479 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.083922 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.099234 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.117405 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.136491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.136544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.136558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.136588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.136604 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.142888 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.159151 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.181154 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.201010 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.223704 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239667 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239680 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.239730 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.254828 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.297291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.297513 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-24 21:07:44.297470965 +0000 UTC m=+36.380057635 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.297589 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.297680 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.297797 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.297886 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:07:44.297863837 +0000 UTC m=+36.380450517 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.297884 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.298042 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:44.298017041 +0000 UTC m=+36.380603721 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.343852 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.343913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.343926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.343955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.343973 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.398356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.398482 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398602 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398603 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398638 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398657 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:36 crc 
kubenswrapper[4801]: E1124 21:07:36.398619 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398715 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398727 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:44.398706358 +0000 UTC m=+36.481293038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.398752 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:44.398740329 +0000 UTC m=+36.481327009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.447490 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.447561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.447573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.447600 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.447614 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.552609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.552674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.552690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.552715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.552732 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.655905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.655963 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.655982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.656001 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.656015 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.663319 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.663472 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.664938 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:36 crc kubenswrapper[4801]: E1124 21:07:36.665016 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.759882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.759938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.759952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.759977 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.759994 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.863499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.863575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.863602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.863639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.863666 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.955721 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.956489 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.959282 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e" exitCode=0 Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.959342 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.967732 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.967792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.967815 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.967846 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.967871 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:36Z","lastTransitionTime":"2025-11-24T21:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:36 crc kubenswrapper[4801]: I1124 21:07:36.977201 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.000111 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.005469 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.029343 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.048207 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.070897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.070936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.070950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.070969 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.070987 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.076498 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.112836 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.130801 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.154951 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.182137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.182206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.182219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.182244 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.182258 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.190161 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.232493 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.255114 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.270418 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.284670 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.284712 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.284720 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.284737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.284747 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.285584 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.299813 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 
21:07:37.309923 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.325100 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.339925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.354906 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.372579 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.386933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.387001 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.387055 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.387078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.387089 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.392751 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.411870 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.430376 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.461993 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},
{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"star
tedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.478020 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.490497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.490552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.490572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.490597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.490611 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.500764 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.519150 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.550703 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.568929 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.593511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.593572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.593595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.593617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 
21:07:37.593630 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.595022 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.611560 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.663102 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:37 crc kubenswrapper[4801]: E1124 21:07:37.663286 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.697515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.697583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.697599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.697623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.697637 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.800867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.800916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.800926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.800946 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.800959 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.904830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.904927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.904945 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.905526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.905592 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:37Z","lastTransitionTime":"2025-11-24T21:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.966692 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca" exitCode=0 Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.966782 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca"} Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.966842 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.967255 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:37 crc kubenswrapper[4801]: I1124 21:07:37.984490 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:37Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.002396 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.005026 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.008862 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.008897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.008909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.008925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.008939 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.020471 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.032675 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.047750 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.064427 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.088140 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.106154 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.111526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.111575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.111592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc 
kubenswrapper[4801]: I1124 21:07:38.111612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.111628 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.131965 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.152921 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.166944 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.182983 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.214089 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.214148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.214164 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.214189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.214206 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.215074 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.226485 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.239103 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.251535 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.273418 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.298332 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.316503 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.317488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.317536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.317552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.317571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.317585 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.334956 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.349260 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.376635 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.400335 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.416928 4801 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.418271 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.83:37738->38.102.83.83:6443: use of closed network 
connection" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.426047 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.426113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.426128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.426142 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.426151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.448123 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.466344 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.475860 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.489878 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.505890 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.518619 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.528686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.528756 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.528775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc 
kubenswrapper[4801]: I1124 21:07:38.528801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.528819 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.631672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.632178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.632309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.632453 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.632527 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.663103 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.663173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:38 crc kubenswrapper[4801]: E1124 21:07:38.663662 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:38 crc kubenswrapper[4801]: E1124 21:07:38.663719 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.680826 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.701795 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.717264 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.733702 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.747416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.747475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.747495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.747521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.747539 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.773678 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.787471 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.800076 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.813896 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.825002 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.837380 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.849464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.849508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.849522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.849542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.849558 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.860882 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.877132 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.891187 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.908473 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.922416 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.952515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.952574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.952587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:38 crc 
kubenswrapper[4801]: I1124 21:07:38.952608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.952621 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:38Z","lastTransitionTime":"2025-11-24T21:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.978526 4801 generic.go:334] "Generic (PLEG): container finished" podID="7ad310fd-a52d-4270-9403-4b40769c580e" containerID="05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e" exitCode=0 Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.978614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerDied","Data":"05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e"} Nov 24 21:07:38 crc kubenswrapper[4801]: I1124 21:07:38.978730 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.000339 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.021179 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.035467 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.052771 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.055043 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.055118 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.055135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.055157 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.055172 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.072481 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.089441 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af770
8d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.104602 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc81
2aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.147995 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.158403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.158457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.158478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.158498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.158509 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.179642 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.220030 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.266317 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.266773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.266788 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.266811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.266827 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.273154 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.304341 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.337667 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.370739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.370779 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.370792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.370812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 
21:07:39.370826 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.383158 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.422578 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.473083 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.473130 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.473143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.473163 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.473180 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.576585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.576652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.576673 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.576705 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.576724 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.662937 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:39 crc kubenswrapper[4801]: E1124 21:07:39.663195 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.681171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.681257 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.681282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.681317 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.681343 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.785218 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.785284 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.785301 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.785327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.785346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.888728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.888819 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.888840 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.888863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.888880 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.991540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.991633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.991658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.991686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.991706 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:39Z","lastTransitionTime":"2025-11-24T21:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:39 crc kubenswrapper[4801]: I1124 21:07:39.995414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" event={"ID":"7ad310fd-a52d-4270-9403-4b40769c580e","Type":"ContainerStarted","Data":"8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.000007 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/0.log" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.005361 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62" exitCode=1 Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.005503 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.006776 4801 scope.go:117] "RemoveContainer" containerID="0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.020342 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.038313 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.051971 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.074641 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b7
43884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.096246 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.096296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.096310 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.096331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.096346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.097930 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.116923 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.133648 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.153906 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: 
I1124 21:07:40.168129 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.190114 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.199663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.199811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.199875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.199950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.200012 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.212783 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.231869 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.246913 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.275315 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.290469 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.303284 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.303347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.303359 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.303397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.303413 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.314877 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.331940 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.346900 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.373413 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.387718 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: 
I1124 21:07:40.407167 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.407225 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.407240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.407264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.407279 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.412807 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.431226 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.449843 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.466862 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.491801 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"message\\\":\\\"1124 21:07:39.412069 6003 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:39.412074 6003 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:07:39.412085 6003 handler.go:190] Sending 
*v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:07:39.412101 6003 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:07:39.412126 6003 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:07:39.412134 6003 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:07:39.412161 6003 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:07:39.412160 6003 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:07:39.412175 6003 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:07:39.412182 6003 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:39.412214 6003 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:07:39.412209 6003 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:39.412252 6003 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:39.412258 6003 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:39.412336 6003 factory.go:656] Stopping watch factory\\\\nI1124 21:07:39.412356 6003 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.504611 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.510632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.510697 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.510721 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.510751 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.510775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.523022 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.544786 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.583197 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.614810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.614881 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.614897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.614927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.614947 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.622675 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:40Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.663329 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.663430 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:40 crc kubenswrapper[4801]: E1124 21:07:40.663532 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:40 crc kubenswrapper[4801]: E1124 21:07:40.663636 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.718822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.718906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.718923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.718973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.718992 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.821280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.821357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.821383 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.821401 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.821413 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.925054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.925315 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.925404 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.925480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:40 crc kubenswrapper[4801]: I1124 21:07:40.925543 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:40Z","lastTransitionTime":"2025-11-24T21:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.012079 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/0.log" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.017341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.017505 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.028784 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.028826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.028843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.028866 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.028878 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.032843 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.044421 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.058896 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.072200 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.089339 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.104868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.116524 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: 
I1124 21:07:41.131274 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.131320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.131329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.131351 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.131391 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.141997 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.158693 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.182909 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.201708 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234292 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"message\\\":\\\"1124 21:07:39.412069 6003 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:39.412074 6003 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:07:39.412085 6003 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:07:39.412101 6003 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1124 21:07:39.412126 6003 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:07:39.412134 6003 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:07:39.412161 6003 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:07:39.412160 6003 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:07:39.412175 6003 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:07:39.412182 6003 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:39.412214 6003 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:07:39.412209 6003 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:39.412252 6003 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:39.412258 6003 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:39.412336 6003 factory.go:656] Stopping watch factory\\\\nI1124 21:07:39.412356 6003 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234765 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234813 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.234829 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.258400 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.277379 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.299355 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.337538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.337581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.337593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc 
kubenswrapper[4801]: I1124 21:07:41.337609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.337624 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.440912 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.440974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.440984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.441003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.441016 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.543803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.543865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.543884 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.543908 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.543925 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.646981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.647051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.647074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.647101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.647120 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.663644 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:41 crc kubenswrapper[4801]: E1124 21:07:41.663822 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.749844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.749948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.749958 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.749971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.749981 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.849175 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776"] Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.849658 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.853885 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.853959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.853984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.854016 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.854038 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.854135 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.854427 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.872568 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.888814 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.926160 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef2
2397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.942004 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.956678 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 
21:07:41.956750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.956773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.956803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.956827 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:41Z","lastTransitionTime":"2025-11-24T21:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.959160 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.963574 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77czl\" (UniqueName: \"kubernetes.io/projected/84a46587-3f32-40de-a806-a33918e9d29a-kube-api-access-77czl\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.963668 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.963765 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.963921 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/84a46587-3f32-40de-a806-a33918e9d29a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:41 crc kubenswrapper[4801]: I1124 21:07:41.976167 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:41Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.013748 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"message\\\":\\\"1124 21:07:39.412069 6003 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:39.412074 6003 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:07:39.412085 6003 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:07:39.412101 6003 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1124 21:07:39.412126 6003 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:07:39.412134 6003 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:07:39.412161 6003 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:07:39.412160 6003 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:07:39.412175 6003 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:07:39.412182 6003 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:39.412214 6003 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:07:39.412209 6003 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:39.412252 6003 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:39.412258 6003 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:39.412336 6003 factory.go:656] Stopping watch factory\\\\nI1124 21:07:39.412356 6003 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.022759 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/1.log" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.023538 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/0.log" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.026473 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" exitCode=1 Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.026520 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.026563 4801 scope.go:117] "RemoveContainer" containerID="0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.027306 4801 scope.go:117] "RemoveContainer" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" Nov 24 21:07:42 crc kubenswrapper[4801]: E1124 21:07:42.027512 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.037197 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.059973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.060032 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.060046 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.060071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.060083 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.061567 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.065068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/84a46587-3f32-40de-a806-a33918e9d29a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.065141 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77czl\" (UniqueName: \"kubernetes.io/projected/84a46587-3f32-40de-a806-a33918e9d29a-kube-api-access-77czl\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.065211 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.065260 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.066388 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.066455 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/84a46587-3f32-40de-a806-a33918e9d29a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.074226 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/84a46587-3f32-40de-a806-a33918e9d29a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 
21:07:42.081443 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.087312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77czl\" (UniqueName: \"kubernetes.io/projected/84a46587-3f32-40de-a806-a33918e9d29a-kube-api-access-77czl\") pod \"ovnkube-control-plane-749d76644c-wb776\" (UID: \"84a46587-3f32-40de-a806-a33918e9d29a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.100041 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.118112 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.136691 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.152982 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.162844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.162879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.162889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.162906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.162919 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.168396 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.174284 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.181806 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.198562 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.212433 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.230554 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.251498 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.266592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.266628 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.266689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.267030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.267104 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.267186 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.292253 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.307080 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.322541 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.341792 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.356335 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: 
I1124 21:07:42.369723 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.369754 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.369767 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.369783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.369795 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.369905 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.401733 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.440834 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.473433 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.473495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.473518 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.473549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.473571 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.482604 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.523874 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.569467 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"message\\\":\\\"1124 21:07:39.412069 6003 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:39.412074 6003 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:07:39.412085 6003 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:07:39.412101 6003 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1124 21:07:39.412126 6003 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:07:39.412134 6003 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:07:39.412161 6003 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:07:39.412160 6003 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:07:39.412175 6003 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:07:39.412182 6003 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:39.412214 6003 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:07:39.412209 6003 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:39.412252 6003 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:39.412258 6003 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:39.412336 6003 factory.go:656] Stopping watch factory\\\\nI1124 21:07:39.412356 6003 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.576560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 
21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.576615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.576631 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.576650 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.576665 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.663925 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:42 crc kubenswrapper[4801]: E1124 21:07:42.664126 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.664679 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:42 crc kubenswrapper[4801]: E1124 21:07:42.664761 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.679612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.679675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.679696 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.679723 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.679739 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.781899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.781945 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.781957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.781976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.781989 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.885952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.886011 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.886063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.886088 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.886102 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.988879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.988938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.988952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.988972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:42 crc kubenswrapper[4801]: I1124 21:07:42.988988 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:42Z","lastTransitionTime":"2025-11-24T21:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.034512 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" event={"ID":"84a46587-3f32-40de-a806-a33918e9d29a","Type":"ContainerStarted","Data":"953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.034594 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" event={"ID":"84a46587-3f32-40de-a806-a33918e9d29a","Type":"ContainerStarted","Data":"31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.034615 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" event={"ID":"84a46587-3f32-40de-a806-a33918e9d29a","Type":"ContainerStarted","Data":"8d7b36c7912a8a7b92f52482330180cf78b25451af71e1780fb240b4932751ca"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.038090 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/1.log" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.044492 4801 scope.go:117] "RemoveContainer" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" Nov 24 21:07:43 crc kubenswrapper[4801]: E1124 21:07:43.044825 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.060280 4801 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.092431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.092490 4801 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.092506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.092532 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.092553 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.094163 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.113273 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.134773 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.159170 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.178241 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: 
I1124 21:07:43.190436 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.194927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.194985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.194997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.195017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.195029 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.206046 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.220563 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.238409 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.253176 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.274981 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0141e477ac005b82b2099507c9ecd9447ed95afbd9cffac9c398431b0c49ff62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"message\\\":\\\"1124 21:07:39.412069 6003 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:39.412074 6003 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:07:39.412085 6003 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:07:39.412101 6003 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1124 21:07:39.412126 6003 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:07:39.412134 6003 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:07:39.412161 6003 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:07:39.412160 6003 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:07:39.412175 6003 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:07:39.412182 6003 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:39.412214 6003 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:07:39.412209 6003 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:39.412252 6003 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:39.412258 6003 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:39.412336 6003 factory.go:656] Stopping watch factory\\\\nI1124 21:07:39.412356 6003 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.290492 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.297607 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.297647 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.297656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.297681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.297692 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.308593 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.324403 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.335269 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.347691 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.359463 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.372172 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.398577 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.400923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.400964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.400976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.400998 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.401010 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.415189 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.441029 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.485520 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.504562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.504604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.504621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.504643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.504658 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.519821 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.558413 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.599884 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.608876 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.609224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.609390 4801 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.609555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.609662 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.644482 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.662841 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:43 crc kubenswrapper[4801]: E1124 21:07:43.663021 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.681908 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.718828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.719254 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.719353 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.719480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.719576 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.721758 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.723040 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-llnf4"] Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.723703 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:43 crc kubenswrapper[4801]: E1124 21:07:43.723831 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.771990 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.796494 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.822578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.823055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.823345 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.823655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.823996 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.838092 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.879773 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.887229 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:43 crc kubenswrapper[4801]: 
I1124 21:07:43.887414 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5875t\" (UniqueName: \"kubernetes.io/projected/3434122b-ad4c-40f8-89fc-8829fd158ae3-kube-api-access-5875t\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.927234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.927612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.927744 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.927842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.927931 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:43Z","lastTransitionTime":"2025-11-24T21:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.936309 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.958237 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:43Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:43 crc kubenswrapper[4801]: I1124 21:07:43.988781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:43 crc kubenswrapper[4801]: 
I1124 21:07:43.988924 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5875t\" (UniqueName: \"kubernetes.io/projected/3434122b-ad4c-40f8-89fc-8829fd158ae3-kube-api-access-5875t\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:43 crc kubenswrapper[4801]: E1124 21:07:43.989137 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:43 crc kubenswrapper[4801]: E1124 21:07:43.989338 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:44.489293961 +0000 UTC m=+36.571880851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.008896 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.031250 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.031327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 
crc kubenswrapper[4801]: I1124 21:07:44.031345 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.031434 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.031455 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.032630 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5875t\" (UniqueName: \"kubernetes.io/projected/3434122b-ad4c-40f8-89fc-8829fd158ae3-kube-api-access-5875t\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.067837 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.104983 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: 
I1124 21:07:44.134574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.134647 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.134662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.134679 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.134694 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.140898 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.181487 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.226689 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6
aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.237574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.237629 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.237649 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.237676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.237695 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.262925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.301166 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.338619 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.341669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.341737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.341759 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.341797 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.341820 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.386688 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.394177 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.394312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.394359 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.394443 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:08:00.394425468 +0000 UTC m=+52.477012138 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.394566 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.394721 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:00.394692726 +0000 UTC m=+52.477279606 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.394568 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.394847 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:00.39481346 +0000 UTC m=+52.477400300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.417634 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.445414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.445466 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.445478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.445496 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.445507 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.464468 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.495074 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.495163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.495249 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.495533 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.495580 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.495605 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.495720 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:00.495690562 +0000 UTC m=+52.578277272 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496335 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496428 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496453 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496516 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:00.496495386 +0000 UTC m=+52.579082096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496617 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.496670 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:45.496650021 +0000 UTC m=+37.579236721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.505519 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.537959 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:44Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.548966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.549036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.549048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.549063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.549075 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.652415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.652486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.652512 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.652550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.652575 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.663779 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.663970 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.664458 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:44 crc kubenswrapper[4801]: E1124 21:07:44.664697 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.755921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.755995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.756023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.756054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.756080 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.859746 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.860233 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.860470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.860630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.860771 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.965260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.965323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.965346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.965407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:44 crc kubenswrapper[4801]: I1124 21:07:44.965435 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:44Z","lastTransitionTime":"2025-11-24T21:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.067880 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.068520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.069287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.069508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.069693 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.173638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.173715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.173742 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.173783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.173812 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.265598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.265830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.265939 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.266057 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.266163 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.281584 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.287318 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.287385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.287399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.287420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.287434 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.301102 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.306567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.306638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.306662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.306691 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.306715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.325884 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.330898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.330949 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.330970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.330992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.331008 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.345219 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.349942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.349991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.350003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.350025 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.350036 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.364040 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.364292 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.366438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.366502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.366520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.366551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.366570 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.470392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.470441 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.470456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.470475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.470489 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.507327 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.507546 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.507678 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:47.507653431 +0000 UTC m=+39.590240171 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.575173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.575814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.575831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.575855 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.575871 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.663993 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.664091 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.664601 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.664857 4801 scope.go:117] "RemoveContainer" containerID="06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075" Nov 24 21:07:45 crc kubenswrapper[4801]: E1124 21:07:45.664920 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.679338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.679737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.679868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.679961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.680043 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.783319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.783406 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.783420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.783442 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.783456 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.886196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.886252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.886267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.886293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.886310 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.990107 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.990197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.990217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.990252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:45 crc kubenswrapper[4801]: I1124 21:07:45.990275 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:45Z","lastTransitionTime":"2025-11-24T21:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.058132 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.060826 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.061296 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.077507 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.091471 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.093287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.093321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.093332 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.093351 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.093378 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.104144 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.125264 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.149629 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.167139 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.185697 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.196773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.196845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.196873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.196906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.196998 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.202914 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.219725 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.240321 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d1
2db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.261975 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.281453 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.300480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.300537 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.300554 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.300575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.300589 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.305426 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.331736 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.345904 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.359845 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc 
kubenswrapper[4801]: I1124 21:07:46.377011 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:46Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.403841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.403896 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.403914 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.403940 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.403955 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.506688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.506759 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.506775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.506799 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.506814 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.609464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.609525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.609540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.609566 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.609582 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.663338 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.663532 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:46 crc kubenswrapper[4801]: E1124 21:07:46.663620 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:46 crc kubenswrapper[4801]: E1124 21:07:46.663759 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.712811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.712904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.712916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.712939 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.712952 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.816666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.816726 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.816736 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.816760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.816775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.924413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.924467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.924485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.924510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:46 crc kubenswrapper[4801]: I1124 21:07:46.924526 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:46Z","lastTransitionTime":"2025-11-24T21:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.028506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.028569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.028583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.028613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.028628 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.131496 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.131549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.131562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.131587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.131607 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.234335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.234407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.234422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.234441 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.234453 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.337204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.337259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.337278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.337299 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.337310 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.440399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.440491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.440505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.440531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.440551 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.530929 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:47 crc kubenswrapper[4801]: E1124 21:07:47.531281 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:47 crc kubenswrapper[4801]: E1124 21:07:47.531454 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:51.531419943 +0000 UTC m=+43.614006833 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.543330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.543400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.543411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.543431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.543442 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.645994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.646058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.646074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.646096 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.646112 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.663657 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.663664 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:47 crc kubenswrapper[4801]: E1124 21:07:47.663879 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:47 crc kubenswrapper[4801]: E1124 21:07:47.663967 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.748391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.748473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.748487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.748507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.748547 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.851334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.851416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.851431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.851454 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.851471 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.954698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.954768 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.954786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.954810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:47 crc kubenswrapper[4801]: I1124 21:07:47.954829 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:47Z","lastTransitionTime":"2025-11-24T21:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.057776 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.057856 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.057875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.057905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.057926 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.166863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.166979 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.166995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.167013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.167044 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.270994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.271116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.271145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.271187 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.271216 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.374764 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.374844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.374865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.374897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.374920 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.478810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.478894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.478918 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.478951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.479052 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.581918 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.582190 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.582318 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.582437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.582531 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.663138 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.663152 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:48 crc kubenswrapper[4801]: E1124 21:07:48.663423 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:48 crc kubenswrapper[4801]: E1124 21:07:48.663460 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.683868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.686045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 
21:07:48.686084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.686097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.686118 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.686133 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.702723 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.719019 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.750958 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.768335 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789257 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789318 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789343 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789436 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.789706 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc 
kubenswrapper[4801]: I1124 21:07:48.809024 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.829505 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.845408 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.861223 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.877768 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892034 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892148 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.892457 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.908933 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.927593 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.938860 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: 
I1124 21:07:48.951801 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.970887 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.995207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.995260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.995272 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.995290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:48 crc kubenswrapper[4801]: I1124 21:07:48.995397 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:48Z","lastTransitionTime":"2025-11-24T21:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.099906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.099985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.099997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.100018 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.100031 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.203444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.203488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.203501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.203522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.203534 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.306807 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.306928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.306946 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.306972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.306992 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.409885 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.409942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.409958 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.409978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.409994 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.498039 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.499493 4801 scope.go:117] "RemoveContainer" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" Nov 24 21:07:49 crc kubenswrapper[4801]: E1124 21:07:49.499789 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.516103 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.516174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.516191 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.516217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.516236 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.619344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.619414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.619426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.619447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.619461 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.663242 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.663274 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:49 crc kubenswrapper[4801]: E1124 21:07:49.663386 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:49 crc kubenswrapper[4801]: E1124 21:07:49.663554 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.722996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.723094 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.723110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.723137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.723152 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.827262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.827630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.827778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.827920 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.828045 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.931497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.931563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.931586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.931617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:49 crc kubenswrapper[4801]: I1124 21:07:49.931638 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:49Z","lastTransitionTime":"2025-11-24T21:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.034645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.034737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.034757 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.034782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.034800 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.138615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.139098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.139290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.139542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.139781 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.244012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.244074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.244092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.244120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.244139 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.347991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.348063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.348082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.348276 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.348304 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.451743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.451810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.451833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.451859 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.451882 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.555075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.555153 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.555176 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.555209 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.555232 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.659486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.659558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.659575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.659601 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.659620 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.663330 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.663530 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:50 crc kubenswrapper[4801]: E1124 21:07:50.663723 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:50 crc kubenswrapper[4801]: E1124 21:07:50.664514 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.770005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.770084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.770148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.770205 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.770233 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.873542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.873600 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.873618 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.873642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.873660 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.977081 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.977556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.977743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.977909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:50 crc kubenswrapper[4801]: I1124 21:07:50.978152 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:50Z","lastTransitionTime":"2025-11-24T21:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.080917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.081020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.081058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.081097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.081132 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.185126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.185211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.185230 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.185261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.185280 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.288271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.288339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.288357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.288412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.288432 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.391899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.391970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.392001 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.392034 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.392058 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.495986 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.496092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.496112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.496143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.496165 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.582929 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:51 crc kubenswrapper[4801]: E1124 21:07:51.583078 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:51 crc kubenswrapper[4801]: E1124 21:07:51.583160 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:07:59.58313875 +0000 UTC m=+51.665725420 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.599265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.599425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.599451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.599484 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.599505 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.663742 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:51 crc kubenswrapper[4801]: E1124 21:07:51.663946 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.664080 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:51 crc kubenswrapper[4801]: E1124 21:07:51.664357 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.703037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.703112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.703133 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.703168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.703252 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.806637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.806703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.806720 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.806747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.806766 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.909859 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.909936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.909956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.909981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:51 crc kubenswrapper[4801]: I1124 21:07:51.909999 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:51Z","lastTransitionTime":"2025-11-24T21:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.013180 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.013262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.013289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.013320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.013343 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.116574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.116641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.116659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.116685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.116703 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.219727 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.219798 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.219819 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.219844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.219901 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.323042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.323122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.323142 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.323199 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.323219 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.427424 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.427506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.427528 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.427557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.427580 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.531502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.531593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.531617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.531675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.531700 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.635987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.636054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.636112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.636140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.636161 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.663688 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.663833 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:52 crc kubenswrapper[4801]: E1124 21:07:52.663924 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:52 crc kubenswrapper[4801]: E1124 21:07:52.664052 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.740519 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.740608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.740630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.740662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.740685 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.843530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.843620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.843642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.843676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.843697 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.950269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.950337 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.950356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.950429 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:52 crc kubenswrapper[4801]: I1124 21:07:52.950448 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:52Z","lastTransitionTime":"2025-11-24T21:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.054768 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.054833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.054851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.054876 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.054894 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.158790 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.158879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.158899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.158934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.158956 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.262475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.262549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.262561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.262583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.262597 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.365505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.365555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.365567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.365585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.365598 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.469078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.469150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.469169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.469201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.469220 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.572658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.572718 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.572735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.572761 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.572781 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.663411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.663411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:53 crc kubenswrapper[4801]: E1124 21:07:53.663762 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:53 crc kubenswrapper[4801]: E1124 21:07:53.663827 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.675599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.675662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.675687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.675716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.675737 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.778595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.778683 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.778707 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.778740 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.778766 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.882122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.882184 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.882200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.882226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.882244 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.985829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.985895 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.985915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.985942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:53 crc kubenswrapper[4801]: I1124 21:07:53.985960 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:53Z","lastTransitionTime":"2025-11-24T21:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.088957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.089017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.089036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.089060 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.089077 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.191763 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.191825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.191843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.191868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.191886 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.295566 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.295616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.295627 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.295645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.295656 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.399287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.399396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.399418 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.399450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.399470 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.502262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.502329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.502349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.502403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.502422 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.605612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.605646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.605656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.605676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.605695 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.663508 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:54 crc kubenswrapper[4801]: E1124 21:07:54.663651 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.663516 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:54 crc kubenswrapper[4801]: E1124 21:07:54.663846 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.708319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.708411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.708432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.708457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.708476 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.811087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.811150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.811158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.811173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.811201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.914646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.914771 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.914790 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.914818 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:54 crc kubenswrapper[4801]: I1124 21:07:54.914835 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:54Z","lastTransitionTime":"2025-11-24T21:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.018243 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.018292 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.018308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.018331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.018349 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.122205 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.122249 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.122263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.122288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.122303 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.225773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.225811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.225822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.225839 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.225848 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.328581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.328623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.328632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.328648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.328661 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.431884 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.431931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.431943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.431962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.431974 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.535602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.535692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.535708 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.535732 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.535748 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.622336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.622425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.622445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.622474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.622497 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.647648 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.653691 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.653774 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.653792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.653814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.653830 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.663987 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.664128 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.664242 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.664454 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.673585 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.678286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.678332 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.678350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.678407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.678430 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.698711 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.705817 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.705869 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.705893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.705922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.705942 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.725969 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.731134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.731194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.731214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.731240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.731261 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.750764 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:55 crc kubenswrapper[4801]: E1124 21:07:55.751249 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.754035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.754095 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.754110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.754134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.754151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.857920 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.857980 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.857996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.858019 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.858035 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.961464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.961516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.961534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.961560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:55 crc kubenswrapper[4801]: I1124 21:07:55.961580 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:55Z","lastTransitionTime":"2025-11-24T21:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.064868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.064926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.064944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.064970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.064989 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.167921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.167988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.168006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.168035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.168057 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.270934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.270993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.271010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.271035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.271054 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.374190 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.374242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.374261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.374287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.374306 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.477124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.477174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.477193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.477215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.477233 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.580824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.580917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.580936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.580962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.580980 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.663184 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.663220 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:56 crc kubenswrapper[4801]: E1124 21:07:56.663390 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:56 crc kubenswrapper[4801]: E1124 21:07:56.663545 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.683503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.683538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.683548 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.683562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.683572 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.786354 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.786457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.786477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.786502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.786522 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.889881 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.889948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.889986 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.890024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.890049 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.992748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.992787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.992797 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.992816 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:56 crc kubenswrapper[4801]: I1124 21:07:56.992828 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:56Z","lastTransitionTime":"2025-11-24T21:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.096103 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.096587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.096635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.096669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.096692 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.199444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.199515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.199539 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.199568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.199593 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.302710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.302784 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.302806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.302835 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.302857 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.406578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.406642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.406695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.406722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.406745 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.509603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.509698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.509717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.509743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.509763 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.613255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.613336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.613355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.613420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.613441 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.663239 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:57 crc kubenswrapper[4801]: E1124 21:07:57.663475 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.663649 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:57 crc kubenswrapper[4801]: E1124 21:07:57.663921 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.716834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.716894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.716917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.716944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.716966 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.820449 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.820525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.820547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.820582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.820600 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.924115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.924240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.924259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.924286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:57 crc kubenswrapper[4801]: I1124 21:07:57.924328 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:57Z","lastTransitionTime":"2025-11-24T21:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.027286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.027417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.027441 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.027502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.027525 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.130676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.130727 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.130747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.130774 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.130795 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.234002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.234511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.234667 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.234822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.234969 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.338493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.338909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.339075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.339276 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.339468 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.442455 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.442516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.442540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.442571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.442590 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.545406 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.545463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.545472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.545494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.545506 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.648683 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.648763 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.648782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.648810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.648828 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.663273 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:07:58 crc kubenswrapper[4801]: E1124 21:07:58.663528 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.663662 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:07:58 crc kubenswrapper[4801]: E1124 21:07:58.663831 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.682202 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.698719 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.716493 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.731754 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.751771 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.752026 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.752174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.752335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.752495 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.756233 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.776723 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.796937 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.818960 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.837847 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: 
I1124 21:07:58.856149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.856330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.856425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.856524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.856626 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.858730 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.879286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d1
2db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.898926 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.923965 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.940785 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.960120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.960270 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.960352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.960459 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.960542 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:58Z","lastTransitionTime":"2025-11-24T21:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.969844 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:58 crc kubenswrapper[4801]: I1124 21:07:58.983741 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.002201 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:07:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:07:59 crc 
kubenswrapper[4801]: I1124 21:07:59.063265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.063342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.063360 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.063426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.063449 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.167359 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.167458 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.167479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.167508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.167532 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.271151 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.271219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.271239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.271268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.271286 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.374895 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.375463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.375688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.375983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.376174 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.479020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.479067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.479077 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.479099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.479112 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.581490 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.581564 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.581586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.581616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.581638 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.649704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:59 crc kubenswrapper[4801]: E1124 21:07:59.649910 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:59 crc kubenswrapper[4801]: E1124 21:07:59.649964 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:15.649945556 +0000 UTC m=+67.732532246 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.663247 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.663512 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:07:59 crc kubenswrapper[4801]: E1124 21:07:59.663686 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:07:59 crc kubenswrapper[4801]: E1124 21:07:59.664345 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.665013 4801 scope.go:117] "RemoveContainer" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.684551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.684606 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.684622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.684642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.684657 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.788197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.788841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.788855 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.788879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.788893 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.891349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.891429 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.891446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.891477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.891501 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.994647 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.994699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.994714 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.994734 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:07:59 crc kubenswrapper[4801]: I1124 21:07:59.994748 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:07:59Z","lastTransitionTime":"2025-11-24T21:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.097552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.097624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.097636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.097657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.097671 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.200267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.200314 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.200326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.200341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.200353 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.303101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.303143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.303156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.303175 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.303189 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.406116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.406167 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.406179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.406200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.406214 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.459788 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.459933 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.460091 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:08:32.460038137 +0000 UTC m=+84.542624827 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.460111 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.460183 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.460215 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:32.460192371 +0000 UTC m=+84.542779051 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.460351 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.460450 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:32.460430649 +0000 UTC m=+84.543017329 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.509256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.509299 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.509308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.509327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc 
kubenswrapper[4801]: I1124 21:08:00.509340 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.561125 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561302 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561335 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561353 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561439 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:32.561419714 +0000 UTC m=+84.644006434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.561433 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561578 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561600 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561616 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.561683 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:32.561666232 +0000 UTC m=+84.644252922 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.612098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.612156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.612172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.612200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.612218 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.615898 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/1.log" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.620080 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.620551 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.647287 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.663513 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.663587 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.663682 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:00 crc kubenswrapper[4801]: E1124 21:08:00.663907 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.669250 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.686014 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.701496 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.715452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.715498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.715511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.715532 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.715546 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.717823 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.730558 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.746161 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.758746 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: 
I1124 21:08:00.769073 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.784948 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.798639 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.812787 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.818188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.818236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.818251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.818269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.818283 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.827868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.847543 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.859868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.868868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc 
kubenswrapper[4801]: I1124 21:08:00.881630 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:00Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.921805 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.921852 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.921867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.921886 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:00 crc kubenswrapper[4801]: I1124 21:08:00.921898 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:00Z","lastTransitionTime":"2025-11-24T21:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.024384 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.024434 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.024448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.024470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.024483 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.134144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.134217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.134239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.134262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.134278 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.236933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.236984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.236992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.237007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.237017 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.340945 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.341010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.341034 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.341065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.341089 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.444770 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.444823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.444861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.444880 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.444895 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.547636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.547700 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.547718 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.547747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.547767 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.626207 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/2.log" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.627426 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/1.log" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.631160 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97" exitCode=1 Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.631200 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.631308 4801 scope.go:117] "RemoveContainer" containerID="280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.631894 4801 scope.go:117] "RemoveContainer" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97" Nov 24 21:08:01 crc kubenswrapper[4801]: E1124 21:08:01.632091 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.651059 4801 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.651137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.651162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.651195 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.651219 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.653771 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.663642 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.663691 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:01 crc kubenswrapper[4801]: E1124 21:08:01.663757 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:01 crc kubenswrapper[4801]: E1124 21:08:01.663875 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.670001 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.691236 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.709939 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.742863 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://280f9bc13f3d40c3d8da85497664d753e812ade6362c13e858ff2d4b53cf914c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:07:41.303446 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:07:41.303210 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 
21:07:41.303615 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:07:41.303627 6219 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:07:41.303632 6219 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:07:41.303638 6219 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:07:41.303689 6219 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303730 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303890 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.303960 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.304216 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:07:41.305282 6219 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc 
kubenswrapper[4801]: I1124 21:08:01.754796 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.755114 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mo
untPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.756182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.756217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.756243 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.756260 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.767152 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc 
kubenswrapper[4801]: I1124 21:08:01.779643 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.795388 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.805617 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.816355 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.833867 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b7
43884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.846575 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.857748 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.858932 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.858981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc 
kubenswrapper[4801]: I1124 21:08:01.858999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.859024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.859041 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.871087 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.881650 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: 
I1124 21:08:01.891280 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.961931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.961993 4801 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.962007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.962031 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:01 crc kubenswrapper[4801]: I1124 21:08:01.962046 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:01Z","lastTransitionTime":"2025-11-24T21:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.064472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.064504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.064512 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.064526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.064537 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.168240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.168322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.168347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.168419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.168438 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.272128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.272196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.272215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.272241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.272261 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.375861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.375970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.375995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.376015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.376034 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.478829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.478905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.478938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.478967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.478990 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.583793 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.583851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.583866 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.583888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.583905 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.638717 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/2.log" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.643561 4801 scope.go:117] "RemoveContainer" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97" Nov 24 21:08:02 crc kubenswrapper[4801]: E1124 21:08:02.643931 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.656689 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.663422 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.663502 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:02 crc kubenswrapper[4801]: E1124 21:08:02.663971 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:02 crc kubenswrapper[4801]: E1124 21:08:02.664347 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.668119 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc 
kubenswrapper[4801]: I1124 21:08:02.686075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.686140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.686159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.686184 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.686199 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.693251 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 
21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.709267 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.724898 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.744649 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.769921 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.788668 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.790533 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.790576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.790597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc 
kubenswrapper[4801]: I1124 21:08:02.790626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.790649 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.804081 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 
21:08:02.819933 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.836188 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T
21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.850587 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.875309 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.893823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.893876 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.893889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.893930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.893953 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.898009 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.915181 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.935286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.948035 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:02 crc kubenswrapper[4801]: 
I1124 21:08:02.997039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.997086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.997100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.997126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:02 crc kubenswrapper[4801]: I1124 21:08:02.997151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:02Z","lastTransitionTime":"2025-11-24T21:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.100494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.100533 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.100541 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.100560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.100570 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.203704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.203749 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.203811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.203828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.203838 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.307488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.307550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.307571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.307599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.307702 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.410661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.410717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.410736 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.410760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.410779 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.513316 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.513358 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.513394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.513413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.513427 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.616922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.616970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.616982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.617003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.617018 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.663455 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:03 crc kubenswrapper[4801]: E1124 21:08:03.663699 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.663479 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:03 crc kubenswrapper[4801]: E1124 21:08:03.664229 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.720099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.720149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.720168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.720195 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.720222 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.822867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.822933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.822956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.823177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.823271 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.925987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.926051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.926074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.926107 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:03 crc kubenswrapper[4801]: I1124 21:08:03.926129 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:03Z","lastTransitionTime":"2025-11-24T21:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.029427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.029497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.029515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.029546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.029563 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.059954 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.075438 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.090921 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.105063 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.118710 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: 
I1124 21:08:04.130827 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.132688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.132726 4801 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.132738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.132760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.132777 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.153029 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.170602 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.184544 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.197357 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.216897 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.231141 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.235281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.235332 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.235344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.235380 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.235393 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.244202 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc 
kubenswrapper[4801]: I1124 21:08:04.261489 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.275237 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.290145 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.308925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.325041 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.337750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.337776 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.337784 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.337799 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.337810 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.440928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.441008 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.441022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.441048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.441060 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.544414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.544473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.544486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.544508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.544523 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.648084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.648150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.648166 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.648192 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.648204 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.662910 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.662944 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:04 crc kubenswrapper[4801]: E1124 21:08:04.663067 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:04 crc kubenswrapper[4801]: E1124 21:08:04.663251 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.751789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.752583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.752634 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.752661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.752675 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.827343 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.844325 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.849126 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.854925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.855011 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.855030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.855058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.855077 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.866606 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.881948 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.893624 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.915465 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b7
43884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.930903 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.947594 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.958310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.958395 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:04 crc 
kubenswrapper[4801]: I1124 21:08:04.958410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.958437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.958451 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:04Z","lastTransitionTime":"2025-11-24T21:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.967053 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: I1124 21:08:04.983404 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:04 crc kubenswrapper[4801]: 
I1124 21:08:04.998670 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.016019 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.034188 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.048155 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.061059 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.061119 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.061136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.061162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.061180 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.069394 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.094494 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.105118 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.117758 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:05 crc 
kubenswrapper[4801]: I1124 21:08:05.163903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.164018 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.164039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.164067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.164121 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.267750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.267831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.267868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.267899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.267921 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.371185 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.371233 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.371243 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.371260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.371272 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.473834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.473878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.473889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.473906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.473918 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.576887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.576954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.576974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.576999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.577013 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.662999 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.663072 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:05 crc kubenswrapper[4801]: E1124 21:08:05.663156 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:05 crc kubenswrapper[4801]: E1124 21:08:05.663311 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.679322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.679416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.679436 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.679460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.679479 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.782244 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.782291 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.782306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.782326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.782341 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.885568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.885621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.885632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.885655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.885671 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.988694 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.988762 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.988800 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.988842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:05 crc kubenswrapper[4801]: I1124 21:08:05.988915 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:05Z","lastTransitionTime":"2025-11-24T21:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.091444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.091552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.091595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.091636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.091675 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.144762 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.144851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.144877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.144914 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.144941 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.202722 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.212677 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.212735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.212754 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.212777 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.212795 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.233190 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.237588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.237624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.237636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.237656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.237671 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.251578 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.255868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.255925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.255941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.255960 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.255973 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.274289 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.279717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.279753 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.279762 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.279778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.279791 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.296402 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.296515 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.298380 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.298420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.298436 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.298451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.298462 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.401783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.401851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.401876 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.401915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.401937 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.505312 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.505416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.505429 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.505451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.505466 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.608695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.608769 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.608794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.608830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.608853 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.663079 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.663079 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.663318 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:06 crc kubenswrapper[4801]: E1124 21:08:06.663475 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.711877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.711943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.711961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.711993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.712018 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.815614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.815675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.815689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.815709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.815722 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.919213 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.919268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.919281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.919303 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:06 crc kubenswrapper[4801]: I1124 21:08:06.919317 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:06Z","lastTransitionTime":"2025-11-24T21:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.022724 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.022791 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.022814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.022843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.022867 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.126938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.126994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.127010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.127033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.127046 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.230494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.230562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.230573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.230595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.230607 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.333613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.333676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.333692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.333711 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.333727 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.436551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.436611 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.436622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.436637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.436648 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.540091 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.540134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.540145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.540163 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.540175 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.643609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.643665 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.643680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.643702 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.643714 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.662937 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.663087 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:07 crc kubenswrapper[4801]: E1124 21:08:07.663344 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:07 crc kubenswrapper[4801]: E1124 21:08:07.663660 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.747807 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.747853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.747863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.747880 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.747891 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.850536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.850619 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.850652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.850669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.850678 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.955017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.955098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.955116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.955142 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:07 crc kubenswrapper[4801]: I1124 21:08:07.955160 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:07Z","lastTransitionTime":"2025-11-24T21:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.058159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.058491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.058568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.058645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.058720 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.161285 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.161321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.161334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.161352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.161380 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.264021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.264082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.264094 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.264117 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.264131 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.367567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.367635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.367659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.367690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.367713 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.470526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.470585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.470602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.470625 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.470642 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.574003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.574092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.574117 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.574147 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.574170 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.662946 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.663290 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:08 crc kubenswrapper[4801]: E1124 21:08:08.663853 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:08 crc kubenswrapper[4801]: E1124 21:08:08.664138 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.677903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.677969 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.677982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.678004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.678018 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.687131 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.705499 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.725243 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.745307 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.769499 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.780013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.780061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.780075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.780100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.780114 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.794141 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.814617 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.848733 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.866172 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.882150 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.883291 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.883339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.883354 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.883407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.883426 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.901545 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.960306 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.976986 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.985830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.985890 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.985917 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.985943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.985960 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:08Z","lastTransitionTime":"2025-11-24T21:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:08 crc kubenswrapper[4801]: I1124 21:08:08.992420 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:09 crc 
kubenswrapper[4801]: I1124 21:08:09.008300 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.024785 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.037170 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.053126 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.089783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.089854 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.089872 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc 
kubenswrapper[4801]: I1124 21:08:09.089897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.089917 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.193993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.194070 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.194092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.194122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.194145 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.297987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.298058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.298083 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.298111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.298134 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.401306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.401399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.401423 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.401448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.401467 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.505650 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.505720 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.505745 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.505776 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.505797 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.609732 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.609793 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.609812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.609836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.609854 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.663297 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.663298 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:09 crc kubenswrapper[4801]: E1124 21:08:09.663635 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:09 crc kubenswrapper[4801]: E1124 21:08:09.663773 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.712698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.712753 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.712765 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.712782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.712793 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.816812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.816875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.816893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.816919 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.816940 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.919910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.919974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.919985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.920004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:09 crc kubenswrapper[4801]: I1124 21:08:09.920016 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:09Z","lastTransitionTime":"2025-11-24T21:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.023339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.023452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.023472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.023498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.023517 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.126540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.126577 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.126593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.126620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.126673 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.230263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.230312 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.230326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.230349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.230387 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.332940 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.332988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.333003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.333022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.333037 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.435071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.435116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.435128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.435144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.435156 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.538785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.538867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.538891 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.538921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.538941 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.642139 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.642215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.642234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.642267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.642285 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.663480 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.663502 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:10 crc kubenswrapper[4801]: E1124 21:08:10.663727 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:10 crc kubenswrapper[4801]: E1124 21:08:10.663906 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.745753 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.745819 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.745838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.745865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.745886 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.848729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.848774 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.848797 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.848818 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.848830 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.951688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.951739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.951750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.951768 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:10 crc kubenswrapper[4801]: I1124 21:08:10.951779 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:10Z","lastTransitionTime":"2025-11-24T21:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.054574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.054646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.054664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.054688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.054708 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.157898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.157954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.157973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.157997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.158020 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.261026 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.261070 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.261082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.261098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.261112 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.363938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.363982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.363993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.364010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.364022 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.466893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.466937 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.466951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.466969 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.466981 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.570299 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.570349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.570393 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.570418 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.570436 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.663634 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.663704 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:11 crc kubenswrapper[4801]: E1124 21:08:11.663798 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:11 crc kubenswrapper[4801]: E1124 21:08:11.663919 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.672155 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.672196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.672210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.672229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.672239 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.774827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.774860 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.774869 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.774885 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.774896 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.878051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.878120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.878142 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.878172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.878233 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.981861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.981890 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.981899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.981939 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:11 crc kubenswrapper[4801]: I1124 21:08:11.981953 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:11Z","lastTransitionTime":"2025-11-24T21:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.085598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.085750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.085763 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.085780 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.085791 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.189239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.189293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.189306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.189327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.189343 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.292205 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.292261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.292274 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.292294 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.292306 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.395260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.395301 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.395315 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.395334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.395350 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.497962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.498010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.498020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.498037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.498048 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.601337 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.601442 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.601461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.601487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.601505 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.663277 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.663291 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:12 crc kubenswrapper[4801]: E1124 21:08:12.663507 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:12 crc kubenswrapper[4801]: E1124 21:08:12.663687 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.705044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.705121 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.705146 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.705177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.705201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.808463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.808517 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.808534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.808558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.808576 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.911518 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.911574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.911592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.911617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:12 crc kubenswrapper[4801]: I1124 21:08:12.911638 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:12Z","lastTransitionTime":"2025-11-24T21:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.014817 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.014865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.014882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.014906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.014924 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.117677 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.117723 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.117739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.117763 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.117780 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.221481 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.221552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.221571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.221597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.221618 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.323523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.323575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.323588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.323612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.323627 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.427733 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.427800 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.427810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.427837 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.427848 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.531716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.531795 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.531822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.531856 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.531881 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.635223 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.635282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.635300 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.635326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.635345 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.663256 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.663266 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:13 crc kubenswrapper[4801]: E1124 21:08:13.663497 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:13 crc kubenswrapper[4801]: E1124 21:08:13.663606 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.738084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.738124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.738134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.738149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.738160 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.840334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.840411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.840425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.840446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.840464 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.944108 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.944178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.944197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.944222 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:13 crc kubenswrapper[4801]: I1124 21:08:13.944248 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:13Z","lastTransitionTime":"2025-11-24T21:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.047220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.047287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.047302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.047336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.047348 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.151158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.151204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.151216 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.151235 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.151248 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.254559 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.254635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.254652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.254675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.254687 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.357853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.357925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.357956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.357989 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.358009 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.461640 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.461716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.461734 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.461773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.461802 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.565229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.565287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.565302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.565323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.565338 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.663459 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:08:14 crc kubenswrapper[4801]: E1124 21:08:14.663870 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.663894 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:08:14 crc kubenswrapper[4801]: E1124 21:08:14.664681 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.665620 4801 scope.go:117] "RemoveContainer" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97"
Nov 24 21:08:14 crc kubenswrapper[4801]: E1124 21:08:14.666929 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.672069 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.672113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.672133 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.672158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.672172 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.774925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.775246 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.775395 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.775529 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.775657 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.878232 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.878638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.878729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.878816 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.878904 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.981931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.981990 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.982001 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.982020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:14 crc kubenswrapper[4801]: I1124 21:08:14.982031 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:14Z","lastTransitionTime":"2025-11-24T21:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.085859 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.085921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.085941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.085966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.085985 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.188737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.188807 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.188822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.188843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.188855 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.291990 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.292058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.292090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.292123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.292149 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.395143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.395220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.395240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.395272 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.395295 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.498298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.498392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.498410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.498438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.498454 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.602161 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.602208 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.602220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.602237 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.602248 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.663482 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.663550 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4"
Nov 24 21:08:15 crc kubenswrapper[4801]: E1124 21:08:15.663694 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:08:15 crc kubenswrapper[4801]: E1124 21:08:15.663817 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.705337 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.705398 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.705408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.705425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.705435 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.737222 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4"
Nov 24 21:08:15 crc kubenswrapper[4801]: E1124 21:08:15.737427 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 21:08:15 crc kubenswrapper[4801]: E1124 21:08:15.737542 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:08:47.737515616 +0000 UTC m=+99.820102286 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.809030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.809080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.809092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.809108 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.809118 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.912088 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.912134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.912150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.912171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:15 crc kubenswrapper[4801]: I1124 21:08:15.912184 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:15Z","lastTransitionTime":"2025-11-24T21:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.014794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.014888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.014902 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.014926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.014942 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.119577 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.119641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.119657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.119677 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.119694 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.223062 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.223122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.223135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.223167 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.223178 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.325680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.325727 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.325737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.325756 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.325768 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.428817 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.428898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.428909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.428934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.428950 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.532448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.532550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.532569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.532628 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.532645 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.635033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.635074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.635086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.635100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.635112 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662697 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662870 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.662882 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.662983 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.663064 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.676496 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.680490 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.680546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.680555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.680590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.680601 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.696469 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.703169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.703246 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.703262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.703289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.703310 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.720777 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.725985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.726027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.726040 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.726059 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.726073 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.740849 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.744478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.744511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.744521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.744536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.744547 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.758173 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:16 crc kubenswrapper[4801]: E1124 21:08:16.758304 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.759588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.759615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.759626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.759641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.759651 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.862778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.862814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.862824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.862841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.862852 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.966103 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.966189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.966216 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.966245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:16 crc kubenswrapper[4801]: I1124 21:08:16.966266 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:16Z","lastTransitionTime":"2025-11-24T21:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.070335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.070457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.070485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.070520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.070544 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.173991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.174050 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.174072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.174094 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.174110 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.277336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.277422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.277438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.277462 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.277480 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.380482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.380549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.380568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.380591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.380610 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.483506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.483580 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.483594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.483620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.483636 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.586402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.586472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.586493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.586523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.586541 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.663404 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.663400 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:17 crc kubenswrapper[4801]: E1124 21:08:17.663658 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:17 crc kubenswrapper[4801]: E1124 21:08:17.663832 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.688610 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.688669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.688687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.688710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.688728 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.791275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.791355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.791390 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.791415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.791430 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.894278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.894331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.894344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.894379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.894398 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.997507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.997554 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.997565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.997588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:17 crc kubenswrapper[4801]: I1124 21:08:17.997604 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:17Z","lastTransitionTime":"2025-11-24T21:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.101035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.101076 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.101086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.101105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.101115 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.204054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.204103 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.204123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.204155 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.204180 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.307514 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.307558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.307569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.307620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.307638 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.411063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.411152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.411172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.411205 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.411232 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.515844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.515893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.515911 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.515939 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.515958 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.619931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.620000 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.620033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.620058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.620076 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.663398 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.663454 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:18 crc kubenswrapper[4801]: E1124 21:08:18.663559 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:18 crc kubenswrapper[4801]: E1124 21:08:18.663636 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.679093 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9
c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.694230 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc 
kubenswrapper[4801]: I1124 21:08:18.708471 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.711305 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/0.log" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.711351 4801 generic.go:334] "Generic (PLEG): container finished" podID="5f348c59-5453-436a-bcce-548bdef22a27" containerID="31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981" exitCode=1 Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.711409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerDied","Data":"31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981"} Nov 24 21:08:18 crc 
kubenswrapper[4801]: I1124 21:08:18.711870 4801 scope.go:117] "RemoveContainer" containerID="31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.722422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.722463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.722475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.722492 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.722504 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.729879 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c26
3111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.748106 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.766624 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.787636 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.801255 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.820351 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.825588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.825627 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.825637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc 
kubenswrapper[4801]: I1124 21:08:18.825653 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.825667 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.836230 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 
21:08:18.848538 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.859642 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T
21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.871573 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.894526 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.907665 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.920580 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.928653 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.928710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.928723 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.928745 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.928758 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:18Z","lastTransitionTime":"2025-11-24T21:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.935785 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.948041 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.958856 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.968529 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc 
kubenswrapper[4801]: I1124 21:08:18.981712 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:18 crc kubenswrapper[4801]: I1124 21:08:18.994096 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.008680 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.032316 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.032380 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.032389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.032408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.032419 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.063729 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.098002 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.110970 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.124705 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.136692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.136737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.136763 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc 
kubenswrapper[4801]: I1124 21:08:19.136783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.136794 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.139250 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 
21:08:19.152182 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.163437 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T
21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.174220 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.195716 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.209098 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.225753 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.240096 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.240159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.240175 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.240198 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.240212 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.245215 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.255793 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.343307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.343387 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.343401 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.343439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.343451 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.446498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.446546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.446557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.446576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.446590 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.550015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.550076 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.550094 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.550120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.550142 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.652874 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.652943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.652968 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.653005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.653029 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.662922 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.662963 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:19 crc kubenswrapper[4801]: E1124 21:08:19.663209 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:19 crc kubenswrapper[4801]: E1124 21:08:19.663490 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.718587 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/0.log" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.718668 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerStarted","Data":"cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.737340 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.756145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.756193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.756213 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.756239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.756259 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.758580 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.775439 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.790410 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc 
kubenswrapper[4801]: I1124 21:08:19.806912 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.823863 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.845825 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.858947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.859005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.859024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.859048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.859067 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.864738 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f1
9f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.880913 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.895506 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.908031 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.921976 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.939991 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.956515 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.961843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.962221 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.962271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.962322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.962346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:19Z","lastTransitionTime":"2025-11-24T21:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:19 crc kubenswrapper[4801]: I1124 21:08:19.978652 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:19Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.006208 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:20Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.020090 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:20Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.038328 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:20Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.066163 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.066247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.066263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.066291 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.066308 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.170242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.170308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.170320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.170340 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.170355 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.273417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.273485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.273495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.273516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.273531 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.376135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.376193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.376207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.376227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.376241 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.479333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.479414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.479429 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.479455 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.479467 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.582238 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.582291 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.582307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.582331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.582346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.663235 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.663471 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:20 crc kubenswrapper[4801]: E1124 21:08:20.663604 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:20 crc kubenswrapper[4801]: E1124 21:08:20.663778 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.684622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.684667 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.684681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.684694 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.684706 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.787256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.787301 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.787309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.787326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.787335 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.890459 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.890526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.890543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.890571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.890594 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.993861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.993924 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.993946 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.993976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:20 crc kubenswrapper[4801]: I1124 21:08:20.993998 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:20Z","lastTransitionTime":"2025-11-24T21:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.097333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.097405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.097419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.097449 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.097464 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.200734 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.200778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.200789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.200809 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.200821 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.303901 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.303952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.303964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.303983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.303999 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.407031 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.407093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.407105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.407127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.407140 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.510529 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.510596 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.510620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.510655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.510679 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.613150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.613206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.613224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.613247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.613266 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.663998 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.664057 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:21 crc kubenswrapper[4801]: E1124 21:08:21.664297 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:21 crc kubenswrapper[4801]: E1124 21:08:21.664450 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.715592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.715643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.715663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.715685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.715703 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.818986 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.819037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.819048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.819073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.819086 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.922080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.922142 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.922159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.922184 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:21 crc kubenswrapper[4801]: I1124 21:08:21.922201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:21Z","lastTransitionTime":"2025-11-24T21:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.025900 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.025938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.025947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.025963 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.025973 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.139947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.140000 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.140013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.140036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.140048 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.243852 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.243956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.243968 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.243988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.243999 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.346269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.346323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.346338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.346359 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.346387 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.449790 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.449864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.449891 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.449923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.449947 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.553584 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.553645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.553663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.553691 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.553712 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.657169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.657271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.657298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.657333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.657355 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.663659 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.663671 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:22 crc kubenswrapper[4801]: E1124 21:08:22.664032 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:22 crc kubenswrapper[4801]: E1124 21:08:22.664293 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.761548 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.761733 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.761823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.761913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.761949 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.865721 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.865789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.865810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.865839 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.865860 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.969598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.969676 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.969699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.969729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:22 crc kubenswrapper[4801]: I1124 21:08:22.969749 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:22Z","lastTransitionTime":"2025-11-24T21:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.073530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.073599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.073618 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.073656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.073684 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.176935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.177015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.177036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.177063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.177082 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.279904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.279952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.279965 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.279983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.279999 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.382717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.382782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.382795 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.382821 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.382836 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.486065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.486129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.486147 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.486172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.486190 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.589065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.589139 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.589165 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.589199 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.589228 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.663810 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.663931 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:23 crc kubenswrapper[4801]: E1124 21:08:23.664232 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:23 crc kubenswrapper[4801]: E1124 21:08:23.664278 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.692478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.692530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.692550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.692575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.692597 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.795865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.795952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.795972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.795997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.796014 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.900030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.900099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.900117 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.900140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:23 crc kubenswrapper[4801]: I1124 21:08:23.900156 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:23Z","lastTransitionTime":"2025-11-24T21:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.003521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.003576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.003588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.003606 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.003623 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.106992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.107048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.107068 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.107095 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.107113 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.210466 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.210534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.210553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.210582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.210602 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.314913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.314984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.315006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.315035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.315056 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.418564 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.418632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.418651 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.418680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.418700 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.527728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.527808 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.527830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.527860 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.527888 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.631970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.632426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.632558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.632689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.632775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.663922 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.663921 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:24 crc kubenswrapper[4801]: E1124 21:08:24.664183 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:24 crc kubenswrapper[4801]: E1124 21:08:24.664409 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.735483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.735524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.735532 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.735547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.735588 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.839065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.839127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.839145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.839172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.839191 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.943039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.943080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.943090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.943107 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:24 crc kubenswrapper[4801]: I1124 21:08:24.943148 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:24Z","lastTransitionTime":"2025-11-24T21:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.045844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.046119 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.046182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.046242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.046301 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.149908 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.149992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.150015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.150045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.150065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.253744 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.253806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.253824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.253848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.253928 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.357834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.357888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.357906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.357931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.357948 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.460595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.460654 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.460671 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.460696 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.460713 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.563609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.563970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.564210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.564553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.564773 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.663049 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.663049 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:25 crc kubenswrapper[4801]: E1124 21:08:25.663256 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:25 crc kubenswrapper[4801]: E1124 21:08:25.663903 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.664480 4801 scope.go:117] "RemoveContainer" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.668504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.668543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.668555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.668575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.668592 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.771441 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.771477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.771488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.771568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.771583 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.874214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.874252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.874263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.874280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.874290 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.977092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.977145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.977158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.977183 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:25 crc kubenswrapper[4801]: I1124 21:08:25.977195 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:25Z","lastTransitionTime":"2025-11-24T21:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.080440 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.080475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.080487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.080507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.080520 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.183231 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.183271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.183279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.183296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.183306 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.286838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.286891 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.286903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.286924 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.286939 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.389531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.389640 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.389656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.389681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.389965 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.493790 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.493868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.493901 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.493951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.493986 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.597386 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.597436 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.597452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.597473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.597485 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.663269 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:26 crc kubenswrapper[4801]: E1124 21:08:26.663496 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.663876 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:26 crc kubenswrapper[4801]: E1124 21:08:26.663980 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.700657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.700718 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.700735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.700769 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.700785 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.753746 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/2.log" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.759136 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.759933 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.782925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe48befe75a5f165ab4ae136f4da6013d360300
8917bbeec6d7bc848c33416e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.804227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.804262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.804271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.804289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.804302 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.808406 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.824360 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.844288 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc 
kubenswrapper[4801]: I1124 21:08:26.862419 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884a
b686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.878231 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.898005 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.907574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.907622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.907639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.907658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.907672 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:26Z","lastTransitionTime":"2025-11-24T21:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.916655 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f1
9f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.934696 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.951037 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.968794 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:26 crc kubenswrapper[4801]: I1124 21:08:26.987016 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.004851 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.010034 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.010080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.010093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.010111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.010123 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.023530 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.037933 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.071077 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.087646 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.107417 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.112230 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.112295 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.112309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.112330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.112348 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.146040 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.146114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.146124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.146143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.146155 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.165020 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.170056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.170119 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.170139 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.170166 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.170187 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.188741 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.194322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.194469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.194497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.194533 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.194559 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.214881 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.220141 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.220196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.220210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.220230 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.220242 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.237674 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.243267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.243331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.243350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.243724 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.243766 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.265717 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ca2cf578-585b-4133-99fc-dae8e3b13777\\\",\\\"systemUUID\\\":\\\"19e68446-b369-4df2-90ee-d6f4eb03379d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.265886 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.268227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.268487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.268506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.268527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.268540 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.371980 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.372027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.372041 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.372061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.372072 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.476158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.476505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.476624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.476709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.476777 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.581020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.581113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.581137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.581171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.581200 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.663896 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.663930 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.664181 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.664304 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.685525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.685568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.685579 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.685598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.685611 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.766142 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/3.log" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.767181 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/2.log" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.770805 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" exitCode=1 Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.770923 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.771011 4801 scope.go:117] "RemoveContainer" containerID="89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.771932 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:08:27 crc kubenswrapper[4801]: E1124 21:08:27.772197 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.788339 4801 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.788485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.788508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.788531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.788546 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.799934 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.817329 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.836478 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.859497 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.891197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.891252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.891269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.891297 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.891330 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.892724 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.932993 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b
301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.962558 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.978799 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.994209 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.994236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.994245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.994260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.994273 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:27Z","lastTransitionTime":"2025-11-24T21:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:27 crc kubenswrapper[4801]: I1124 21:08:27.999728 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.019702 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cn
i-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.040302 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:26Z\\\",\\\"message\\\":\\\"r removal\\\\nI1124 21:08:26.899623 6807 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:08:26.899661 6807 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:08:26.899681 6807 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:08:26.899730 6807 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 
21:08:26.899769 6807 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:08:26.899767 6807 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:08:26.899790 6807 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:08:26.899799 6807 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:08:26.899824 6807 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:08:26.899831 6807 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:08:26.899871 6807 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:08:26.899840 6807 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:08:26.899887 6807 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:08:26.899851 6807 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:08:26.899912 6807 factory.go:656] Stopping watch factory\\\\nI1124 21:08:26.899949 6807 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc 
kubenswrapper[4801]: I1124 21:08:28.052920 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2
l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.065592 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 
24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.086769 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.097469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.097538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.097558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.097583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.097601 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.106553 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.122460 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.142290 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.158594 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.200485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.200550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.200568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.200594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.200614 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.303878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.303941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.303953 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.303977 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.303989 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.407595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.407669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.407688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.407716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.407738 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.512686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.512767 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.512792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.512828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.512854 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.615996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.617448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.617555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.617694 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.617798 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.663865 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.663923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:28 crc kubenswrapper[4801]: E1124 21:08:28.664639 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:28 crc kubenswrapper[4801]: E1124 21:08:28.664810 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.683593 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.688031 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.706607 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.720786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.720823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.720834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc 
kubenswrapper[4801]: I1124 21:08:28.720851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.720863 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.725339 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 
21:08:28.748436 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.763873 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T
21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.778169 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/3.log" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.783224 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:08:28 crc kubenswrapper[4801]: E1124 21:08:28.783438 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.793017 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.810455 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829179 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829210 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.829558 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.853717 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.869675 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: 
I1124 21:08:28.889720 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.914286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:07:28Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.931843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.931886 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.931896 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.931910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.931922 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:28Z","lastTransitionTime":"2025-11-24T21:08:28Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.933497 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18
fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.949944 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.969160 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:28 crc kubenswrapper[4801]: I1124 21:08:28.999069 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e48aa0b2c1531a38b671b191065b037f13feea9809d83591fa636216560b97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:00Z\\\",\\\"message\\\":\\\"Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669036 6456 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:08:00.669110 6456 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 21:08:00.669169 6456 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 21:08:00.669581 6456 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:08:00.669680 6456 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:08:00.669726 6456 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:08:00.669766 6456 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:08:00.669864 6456 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:26Z\\\",\\\"message\\\":\\\"r removal\\\\nI1124 21:08:26.899623 6807 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:08:26.899661 6807 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:08:26.899681 6807 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:08:26.899730 6807 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 
21:08:26.899769 6807 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:08:26.899767 6807 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:08:26.899790 6807 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:08:26.899799 6807 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:08:26.899824 6807 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:08:26.899831 6807 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:08:26.899871 6807 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:08:26.899840 6807 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:08:26.899887 6807 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:08:26.899851 6807 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:08:26.899912 6807 factory.go:656] Stopping watch factory\\\\nI1124 21:08:26.899949 6807 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc 
kubenswrapper[4801]: I1124 21:08:29.014979 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2
l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.029168 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 
24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.034888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.034936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.034956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.034981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.035000 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.059309 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca25d19b-873c-4511-b126-f5c5d91d64f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472d29067b7818890700d453f099397d3cb27ac1ac29ecdd6f1989afe4c844fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d332a4e446a657b3c8341a8ad2f1f64a693fd769f8a2ebc89016172cd754e122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce6171351717b8bd7cde31d1efcfd96aa0a371ddb2e72dbf9f96a7c00aaae4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b14d155d98f0630ae972bef22397950e2b596bc3c90c8eb6bd15c9ce6677879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9eb4268203a82141062d54a34136f7478c7af71e162abebd077c7ec4baef79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cab70e100c87af2065d42b3cc087a05fe009573d374d00dcf02fbe1c7d5aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e59c783d4ebc2327d666030b41cd977f0e8d6eb9b0853c3ed9334a6595e129d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d41f8d23ac61b36e545d17b743884a2fb4e6326d343f8c7787eefbf6426ed0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.075605 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.091733 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.115879 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c69f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ad310fd-a52d-4270-9403-4b40769c580e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a49b2c39f1cc1a7db05c81f40463d9ef008f1f3338ffb498ed27d1ab582e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb762832c6558e48af6c115a19a071f6008239dfdb847c92617daf95c543446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8769dc2ed1b229d9c546e57baddc9c1fa62b0d1fc0b8fe185688406a1a0bdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48221d987a8524e5dc6513cb28ea037af7708d995148c788eb1e7a10de79065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475b
9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f475b9ad7f7e657b63da35976d593bef0ca42f1de2e40127399bd76bdca2862e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71b04eb8c62833ca8cf0f3d2d173ad5bec9cf8ba6b86026f9f700f54a5be7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05e25a6f3fd64dbb48ffab1eb70a052c7a60b622445c8ae80e24e7634c2edd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c69f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.140892 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce526e40-8920-4d1a-adfe-a7149eed9a11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f481970df5e94c25bb3bf8bf0314256f0c17af311e61921dbdd225a333a19b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t7xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mnfsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: 
I1124 21:08:29.141657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.141792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.141811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.141845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.141859 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.161705 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a46587-3f32-40de-a806-a33918e9d29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31d67e677c3d17fa6bfd7a16463b8691c235a2762a7bac7695ec02dc97f44212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953cc7d82f6bb1e470c17303e69864924c50b301caba380c3523c0614609f214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77czl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb776\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.182947 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6d7a-8c74-45ec-8830-65d533bb19c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcc9d8960b96dcc7424b7e331a7aa2fc770ee716c53dde88e9e30bdaadf436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81950fc9b6884ab686254f561ce203d652c7e8aa974c063c2301c3636cb20a42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823268d4e9002acdc9689e0df39e4eb95f64458c1ecdd57b5d7431f8f572ee64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c265b930a01c722f4914b4951d817de0e5d408a4467c5532c904df2056857ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b7fc57bfe1b0d716e5a31a475266b7780800b4b60ae136f3266b2b001c6075\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T
21:07:28Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764018448\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764018448\\\\\\\\\\\\\\\" (2025-11-24 20:07:28 +0000 UTC to 2026-11-24 20:07:28 +0000 UTC (now=2025-11-24 21:07:28.805782707 +0000 UTC))\\\\\\\"\\\\nI1124 21:07:28.805852 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 21:07:28.805951 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 21:07:28.815522 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 21:07:28.815685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1209588969/tls.crt::/tmp/serving-cert-1209588969/tls.key\\\\\\\"\\\\nI1124 21:07:28.815810 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 21:07:28.815844 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 21:07:28.815863 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 21:07:28.815869 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 21:07:28.815936 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 21:07:28.815941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF1124 21:07:28.816286 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e564791e0e5505c99f81a3af3a4aee2bb2d3fd6918ce28baf10866f7f6bb9b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f6aeb21c341517e8c6359fe4d6b09b6809db3507d9ec3e5388d12db4a1363dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.205072 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8ef56b8-bc58-4a92-9a2e-6dd95d67d555\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3f81dfd2d94f60ed8d55c1d572e229bec74b8d6b459261c2e5f85a82013a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7cdc4a7c263111c42f4e75dd963c420418c51b79f848185967c2a8fcb74eff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a1778ef8844187b98d619d03f770fc05b434d6221d9b53dee33d65171e7d76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3282c780dba13e631148d232b21ec5de0bda071ea44421c78664e07abadb92f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.226529 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7135932fc400c6ddad1de327dad01782d24eae762ed97658501a9b270f6fa7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.245590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.245652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.245664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.245685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.245698 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.249483 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gdjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f348c59-5453-436a-bcce-548bdef22a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:18Z\\\",\\\"message\\\":\\\"2025-11-24T21:07:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123\\\\n2025-11-24T21:07:32+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4263931-6103-4428-a6a9-187e1c6ed123 to /host/opt/cni/bin/\\\\n2025-11-24T21:07:33Z [verbose] multus-daemon started\\\\n2025-11-24T21:07:33Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:08:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wphqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gdjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.272654 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:08:26Z\\\",\\\"message\\\":\\\"r removal\\\\nI1124 21:08:26.899623 6807 handler.go:190] Sending *v1.EgressFirewall event 
handler 9 for removal\\\\nI1124 21:08:26.899661 6807 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:08:26.899681 6807 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:08:26.899730 6807 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:08:26.899769 6807 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:08:26.899767 6807 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 21:08:26.899790 6807 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 21:08:26.899799 6807 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:08:26.899824 6807 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:08:26.899831 6807 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:08:26.899871 6807 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:08:26.899840 6807 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:08:26.899887 6807 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:08:26.899851 6807 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:08:26.899912 6807 factory.go:656] Stopping watch factory\\\\nI1124 21:08:26.899949 6807 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:08:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://224de381d36ce7e7d2
c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnxzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrqff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.285639 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqbj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f40abc6-b686-4090-b952-f36cbf4fb47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72929f240d3807296eded0dc6c8b2e9d38df323ee28a3b8d7e92bdfcb6451733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2l4ww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqbj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.300524 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-llnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3434122b-ad4c-40f8-89fc-8829fd158ae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5875t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-llnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc 
kubenswrapper[4801]: I1124 21:08:29.312873 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d939e4-5cf6-4b28-ad4d-c46ea4d86ada\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ec6ef7d8d2efabfe6bd731629fbb025413066b9cd65538100000fdc2bf877f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d3104c832131fbbbb2a8023e12f19f5528f0304e15f148a5c53d862de0f6fd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927cdd5d69e1419bf706a2c5a97fb74435d157cbc9771d32f38734f18530a8e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d996214bed8af6da471e8363e1911230a4e702a01c495f7b51ddb52b659fb02e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.327608 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.343343 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bee1d1a-543f-41a3-893d-c1089e00f6e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae37346c8bae8b96761493a414a2cbf06a1a68d95adee9c7580a4866fe34c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e455e08f1a5c84dacae789a715a4f08bcc672c206b3c046f7d29ff1c533fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e455e08f1a5c84dacae789a715a4f08bcc672c206b3c046f7d29ff1c533fa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.350994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.351061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.351080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.351108 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.351143 4801 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.367350 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9508a32a643f1aa1ef68e56a9cd061fa40998f6ea6010bb53e2a357f77fbb16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440f8d8b477475328b972e889b05dfe82969dacd4c3d2c469db38dbd53b9f851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.380309 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16bb3f85bef71bb171bff006f5a8fabf5a8a2a7314e26ad4efeef4e7b58d5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.390384 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5rck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e27b0cc-3de6-48f9-9b49-89c0ba6264df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9301fafe39b9ff81fb114350e781de1cd6249fe61602b5969511fd8d3e6b568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7lrh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:07:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5rck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:08:29Z is after 2025-08-24T17:21:41Z" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.454540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.454643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.454666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.454692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.454714 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.557923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.557975 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.557985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.558007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.558022 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.661801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.661841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.661851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.661869 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.661879 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.663880 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:29 crc kubenswrapper[4801]: E1124 21:08:29.664017 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.664618 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:29 crc kubenswrapper[4801]: E1124 21:08:29.664706 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.765031 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.765074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.765083 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.765104 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.765116 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.868888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.868943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.868953 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.868973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.868984 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.972617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.972703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.972728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.972764 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:29 crc kubenswrapper[4801]: I1124 21:08:29.972792 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:29Z","lastTransitionTime":"2025-11-24T21:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.076451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.076508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.076556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.076585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.076605 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.179670 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.179739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.179757 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.179788 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.179808 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.282878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.282949 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.282974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.283006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.283028 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.386286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.386406 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.386432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.386461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.386481 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.489264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.489319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.489343 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.489405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.489431 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.592887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.592990 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.593014 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.593044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.593065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.663597 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.663753 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:30 crc kubenswrapper[4801]: E1124 21:08:30.663805 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:30 crc kubenswrapper[4801]: E1124 21:08:30.664017 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.696836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.696891 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.696903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.696923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.696938 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.800050 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.800125 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.800143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.800169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.800186 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.903737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.903799 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.903816 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.903842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:30 crc kubenswrapper[4801]: I1124 21:08:30.903862 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:30Z","lastTransitionTime":"2025-11-24T21:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.006613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.006662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.006674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.006693 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.006704 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.110466 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.110531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.110549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.110576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.110599 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.213656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.213728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.213742 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.213761 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.213775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.317227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.317392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.317411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.317460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.317474 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.420562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.420636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.420662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.420694 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.420716 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.524086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.524149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.524167 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.524194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.524215 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.627578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.627642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.627662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.627688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.627710 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.663276 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.663311 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:31 crc kubenswrapper[4801]: E1124 21:08:31.663500 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:31 crc kubenswrapper[4801]: E1124 21:08:31.663715 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.730334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.730444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.730454 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.730473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.730485 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.834283 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.834340 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.834350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.834389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.834400 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.937209 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.937288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.937310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.937342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:31 crc kubenswrapper[4801]: I1124 21:08:31.937394 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:31Z","lastTransitionTime":"2025-11-24T21:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.041078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.041157 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.041179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.041207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.041225 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.144748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.144805 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.144821 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.144848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.144867 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.248631 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.248715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.248741 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.248773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.248796 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.351282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.351332 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.351344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.351384 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.351398 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.454626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.454674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.454684 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.454702 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.454713 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.532238 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.532495 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:36.532463575 +0000 UTC m=+148.615050245 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.532598 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.532649 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.532792 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.532875 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:36.532857536 +0000 UTC m=+148.615444206 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.532930 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.533099 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.533048152 +0000 UTC m=+148.615634822 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.557305 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.557350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.557374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.557394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.557409 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.633951 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.634059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634241 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634256 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634332 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634408 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:32 crc 
kubenswrapper[4801]: E1124 21:08:32.634267 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634459 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634557 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.634491962 +0000 UTC m=+148.717078672 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.634595 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.634581515 +0000 UTC m=+148.717168225 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.660652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.660729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.660748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.660775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.660796 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.663087 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.663142 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.663265 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:32 crc kubenswrapper[4801]: E1124 21:08:32.663411 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.763715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.763767 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.763782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.763802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.763821 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.867222 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.867292 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.867306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.867331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.867347 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.970537 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.970620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.970639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.970663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:32 crc kubenswrapper[4801]: I1124 21:08:32.970680 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:32Z","lastTransitionTime":"2025-11-24T21:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.075043 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.075130 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.075153 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.075184 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.075214 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.179469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.179531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.179552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.179583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.179603 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.282922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.282995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.283015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.283044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.283065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.386027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.386109 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.386129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.386159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.386179 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.490295 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.490389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.490410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.490440 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.490461 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.593637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.593704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.593717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.593736 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.593750 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.663504 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.663607 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:33 crc kubenswrapper[4801]: E1124 21:08:33.663732 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:33 crc kubenswrapper[4801]: E1124 21:08:33.663864 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.696643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.696716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.696739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.696768 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.696795 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.800194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.800252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.800264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.800280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.800294 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.902918 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.902981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.902993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.903010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:33 crc kubenswrapper[4801]: I1124 21:08:33.903022 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:33Z","lastTransitionTime":"2025-11-24T21:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.005782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.005828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.005839 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.005858 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.005871 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.109212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.109281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.109300 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.109330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.109353 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.212262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.212825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.212844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.212873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.212891 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.316909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.316972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.316988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.317012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.317030 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.420343 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.420456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.420481 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.420513 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.420539 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.524121 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.524224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.524237 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.524253 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.524263 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.627871 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.627930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.627951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.627977 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.627998 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.663116 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.663557 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:34 crc kubenswrapper[4801]: E1124 21:08:34.663720 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:34 crc kubenswrapper[4801]: E1124 21:08:34.664002 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.731118 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.731180 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.731197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.731228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.731246 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.834748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.834806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.834831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.834859 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.834881 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.937699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.937744 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.937760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.937782 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:34 crc kubenswrapper[4801]: I1124 21:08:34.937800 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:34Z","lastTransitionTime":"2025-11-24T21:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.041171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.041220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.041236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.041258 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.041275 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.144862 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.145250 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.145445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.145711 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.145918 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.250265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.250334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.250356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.250416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.250439 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.353022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.353080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.353099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.353123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.353143 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.455792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.455889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.455908 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.455934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.455951 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.558879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.558947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.558967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.558993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.559013 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.661893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.661973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.661998 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.662028 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.662048 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.663102 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.663173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:35 crc kubenswrapper[4801]: E1124 21:08:35.663233 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:35 crc kubenswrapper[4801]: E1124 21:08:35.663334 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.765100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.765193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.765222 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.765271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.765301 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.867773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.867830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.867847 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.867870 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.867887 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.971316 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.971356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.971382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.971396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:35 crc kubenswrapper[4801]: I1124 21:08:35.971406 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:35Z","lastTransitionTime":"2025-11-24T21:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.074842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.074898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.074915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.074938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.074955 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.177976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.178045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.178064 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.178095 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.178115 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.281760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.281827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.281840 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.281867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.281883 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.385105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.385189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.385202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.385251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.385265 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.488871 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.488921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.488931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.488948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.488958 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.593430 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.593523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.593542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.593567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.593584 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.663325 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.663341 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:36 crc kubenswrapper[4801]: E1124 21:08:36.663588 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:36 crc kubenswrapper[4801]: E1124 21:08:36.664670 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.697009 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.697080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.697097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.697126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.697143 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.800823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.800878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.800887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.800909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.800922 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.904566 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.904627 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.904640 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.904663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:36 crc kubenswrapper[4801]: I1124 21:08:36.904675 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:36Z","lastTransitionTime":"2025-11-24T21:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.008190 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.008246 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.008264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.008290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.008309 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.111143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.111192 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.111202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.111219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.111229 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.215154 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.215224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.215240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.215273 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.215288 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.319044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.319114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.319125 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.319148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.319162 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.422275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.422333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.422346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.422389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.422408 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.487122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.487201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.487228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.487260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.487285 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:08:37Z","lastTransitionTime":"2025-11-24T21:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.552854 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29"] Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.553416 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.557142 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.557203 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.561031 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.564030 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.592734 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58c623d9-9e11-42aa-9ae9-f84af1c69819-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.592779 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58c623d9-9e11-42aa-9ae9-f84af1c69819-service-ca\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.592846 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/58c623d9-9e11-42aa-9ae9-f84af1c69819-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.592883 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.592953 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.628895 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.628873822 podStartE2EDuration="1m9.628873822s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.612436641 +0000 UTC m=+89.695023311" watchObservedRunningTime="2025-11-24 21:08:37.628873822 +0000 UTC m=+89.711460492" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.663349 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.663478 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:37 crc kubenswrapper[4801]: E1124 21:08:37.663523 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:37 crc kubenswrapper[4801]: E1124 21:08:37.663711 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.673234 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7c69f" podStartSLOduration=68.67320297 podStartE2EDuration="1m8.67320297s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.672779887 +0000 UTC m=+89.755366567" watchObservedRunningTime="2025-11-24 21:08:37.67320297 +0000 UTC m=+89.755789640" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.689782 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podStartSLOduration=68.689748364 podStartE2EDuration="1m8.689748364s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.689356004 +0000 UTC m=+89.771942674" watchObservedRunningTime="2025-11-24 21:08:37.689748364 +0000 UTC m=+89.772335074" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694174 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58c623d9-9e11-42aa-9ae9-f84af1c69819-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694221 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58c623d9-9e11-42aa-9ae9-f84af1c69819-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58c623d9-9e11-42aa-9ae9-f84af1c69819-service-ca\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694447 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.694472 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/58c623d9-9e11-42aa-9ae9-f84af1c69819-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.695614 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58c623d9-9e11-42aa-9ae9-f84af1c69819-service-ca\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.704456 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58c623d9-9e11-42aa-9ae9-f84af1c69819-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.706563 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb776" podStartSLOduration=68.706546586 podStartE2EDuration="1m8.706546586s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.706393191 +0000 UTC m=+89.788979861" watchObservedRunningTime="2025-11-24 21:08:37.706546586 +0000 UTC m=+89.789133256" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.713769 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/58c623d9-9e11-42aa-9ae9-f84af1c69819-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-clb29\" (UID: \"58c623d9-9e11-42aa-9ae9-f84af1c69819\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.730961 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.73092034 podStartE2EDuration="1m8.73092034s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.729826688 +0000 UTC m=+89.812413358" watchObservedRunningTime="2025-11-24 21:08:37.73092034 +0000 UTC m=+89.813507010" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.750914 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.750891395 podStartE2EDuration="1m5.750891395s" podCreationTimestamp="2025-11-24 21:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.750402551 +0000 UTC m=+89.832989211" watchObservedRunningTime="2025-11-24 21:08:37.750891395 +0000 UTC m=+89.833478065" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.790012 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gdjvp" podStartSLOduration=68.789978389 podStartE2EDuration="1m8.789978389s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.789885157 +0000 UTC m=+89.872471827" watchObservedRunningTime="2025-11-24 21:08:37.789978389 +0000 UTC m=+89.872565059" Nov 24 21:08:37 crc 
kubenswrapper[4801]: I1124 21:08:37.854752 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nqbj8" podStartSLOduration=69.854727685 podStartE2EDuration="1m9.854727685s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.840751575 +0000 UTC m=+89.923338255" watchObservedRunningTime="2025-11-24 21:08:37.854727685 +0000 UTC m=+89.937314365" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.883963 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.886664 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=33.886647239 podStartE2EDuration="33.886647239s" podCreationTimestamp="2025-11-24 21:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.870880408 +0000 UTC m=+89.953467088" watchObservedRunningTime="2025-11-24 21:08:37.886647239 +0000 UTC m=+89.969233919" Nov 24 21:08:37 crc kubenswrapper[4801]: W1124 21:08:37.901957 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58c623d9_9e11_42aa_9ae9_f84af1c69819.slice/crio-ac329fb88a719dd3da15d3b195c292163b1528e40c80a03001c4e14664a252ce WatchSource:0}: Error finding container ac329fb88a719dd3da15d3b195c292163b1528e40c80a03001c4e14664a252ce: Status 404 returned error can't find the container with id ac329fb88a719dd3da15d3b195c292163b1528e40c80a03001c4e14664a252ce Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.931477 4801 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.931447891 podStartE2EDuration="9.931447891s" podCreationTimestamp="2025-11-24 21:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.928774383 +0000 UTC m=+90.011361053" watchObservedRunningTime="2025-11-24 21:08:37.931447891 +0000 UTC m=+90.014034561" Nov 24 21:08:37 crc kubenswrapper[4801]: I1124 21:08:37.975545 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w5rck" podStartSLOduration=69.975521211 podStartE2EDuration="1m9.975521211s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:37.975422028 +0000 UTC m=+90.058008778" watchObservedRunningTime="2025-11-24 21:08:37.975521211 +0000 UTC m=+90.058107891" Nov 24 21:08:38 crc kubenswrapper[4801]: I1124 21:08:38.663464 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:38 crc kubenswrapper[4801]: I1124 21:08:38.663474 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:38 crc kubenswrapper[4801]: E1124 21:08:38.666192 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:38 crc kubenswrapper[4801]: E1124 21:08:38.666436 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:38 crc kubenswrapper[4801]: I1124 21:08:38.821139 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" event={"ID":"58c623d9-9e11-42aa-9ae9-f84af1c69819","Type":"ContainerStarted","Data":"33ca59195c378df07dcb2307f8e35ca04206c13fd7047ca63c946d0e8a879789"} Nov 24 21:08:38 crc kubenswrapper[4801]: I1124 21:08:38.821473 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" event={"ID":"58c623d9-9e11-42aa-9ae9-f84af1c69819","Type":"ContainerStarted","Data":"ac329fb88a719dd3da15d3b195c292163b1528e40c80a03001c4e14664a252ce"} Nov 24 21:08:38 crc kubenswrapper[4801]: I1124 21:08:38.845111 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-clb29" podStartSLOduration=70.84507977 podStartE2EDuration="1m10.84507977s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:08:38.843126512 +0000 UTC m=+90.925713302" watchObservedRunningTime="2025-11-24 21:08:38.84507977 +0000 UTC m=+90.927666480" Nov 24 21:08:39 crc kubenswrapper[4801]: I1124 21:08:39.663994 4801 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:39 crc kubenswrapper[4801]: I1124 21:08:39.664024 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:39 crc kubenswrapper[4801]: E1124 21:08:39.664280 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:39 crc kubenswrapper[4801]: E1124 21:08:39.664479 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:40 crc kubenswrapper[4801]: I1124 21:08:40.663772 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:40 crc kubenswrapper[4801]: I1124 21:08:40.663811 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:40 crc kubenswrapper[4801]: E1124 21:08:40.665312 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:40 crc kubenswrapper[4801]: E1124 21:08:40.665494 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:41 crc kubenswrapper[4801]: I1124 21:08:41.663105 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:41 crc kubenswrapper[4801]: I1124 21:08:41.663241 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:41 crc kubenswrapper[4801]: E1124 21:08:41.664287 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:41 crc kubenswrapper[4801]: I1124 21:08:41.664710 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:08:41 crc kubenswrapper[4801]: E1124 21:08:41.664926 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:41 crc kubenswrapper[4801]: E1124 21:08:41.664705 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:42 crc kubenswrapper[4801]: I1124 21:08:42.663025 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:42 crc kubenswrapper[4801]: I1124 21:08:42.663127 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:42 crc kubenswrapper[4801]: E1124 21:08:42.663170 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:42 crc kubenswrapper[4801]: E1124 21:08:42.663256 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:43 crc kubenswrapper[4801]: I1124 21:08:43.662846 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:43 crc kubenswrapper[4801]: I1124 21:08:43.662846 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:43 crc kubenswrapper[4801]: E1124 21:08:43.663125 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:43 crc kubenswrapper[4801]: E1124 21:08:43.662998 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:44 crc kubenswrapper[4801]: I1124 21:08:44.663015 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:44 crc kubenswrapper[4801]: E1124 21:08:44.663593 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:44 crc kubenswrapper[4801]: I1124 21:08:44.663077 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:44 crc kubenswrapper[4801]: E1124 21:08:44.663834 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:45 crc kubenswrapper[4801]: I1124 21:08:45.662990 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:45 crc kubenswrapper[4801]: I1124 21:08:45.663090 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:45 crc kubenswrapper[4801]: E1124 21:08:45.663218 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:45 crc kubenswrapper[4801]: E1124 21:08:45.663342 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:46 crc kubenswrapper[4801]: I1124 21:08:46.663724 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:46 crc kubenswrapper[4801]: I1124 21:08:46.663769 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:46 crc kubenswrapper[4801]: E1124 21:08:46.663917 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:46 crc kubenswrapper[4801]: E1124 21:08:46.664075 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:47 crc kubenswrapper[4801]: I1124 21:08:47.663051 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:47 crc kubenswrapper[4801]: I1124 21:08:47.663085 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:47 crc kubenswrapper[4801]: E1124 21:08:47.663241 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:47 crc kubenswrapper[4801]: E1124 21:08:47.663441 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:47 crc kubenswrapper[4801]: I1124 21:08:47.826042 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:47 crc kubenswrapper[4801]: E1124 21:08:47.826202 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:08:47 crc kubenswrapper[4801]: E1124 21:08:47.826269 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs podName:3434122b-ad4c-40f8-89fc-8829fd158ae3 nodeName:}" failed. No retries permitted until 2025-11-24 21:09:51.826252184 +0000 UTC m=+163.908838864 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs") pod "network-metrics-daemon-llnf4" (UID: "3434122b-ad4c-40f8-89fc-8829fd158ae3") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:08:48 crc kubenswrapper[4801]: I1124 21:08:48.663605 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:48 crc kubenswrapper[4801]: I1124 21:08:48.663766 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:48 crc kubenswrapper[4801]: E1124 21:08:48.665514 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:48 crc kubenswrapper[4801]: E1124 21:08:48.665690 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:49 crc kubenswrapper[4801]: I1124 21:08:49.663091 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:49 crc kubenswrapper[4801]: I1124 21:08:49.663166 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:49 crc kubenswrapper[4801]: E1124 21:08:49.663287 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:49 crc kubenswrapper[4801]: E1124 21:08:49.663420 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:50 crc kubenswrapper[4801]: I1124 21:08:50.663301 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:50 crc kubenswrapper[4801]: I1124 21:08:50.663430 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:50 crc kubenswrapper[4801]: E1124 21:08:50.663571 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:50 crc kubenswrapper[4801]: E1124 21:08:50.663829 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:51 crc kubenswrapper[4801]: I1124 21:08:51.662876 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:51 crc kubenswrapper[4801]: E1124 21:08:51.663066 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:51 crc kubenswrapper[4801]: I1124 21:08:51.663329 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:51 crc kubenswrapper[4801]: E1124 21:08:51.663463 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:52 crc kubenswrapper[4801]: I1124 21:08:52.663905 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:52 crc kubenswrapper[4801]: I1124 21:08:52.664031 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:52 crc kubenswrapper[4801]: E1124 21:08:52.664129 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:52 crc kubenswrapper[4801]: E1124 21:08:52.664272 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:53 crc kubenswrapper[4801]: I1124 21:08:53.663391 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:53 crc kubenswrapper[4801]: I1124 21:08:53.663417 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:53 crc kubenswrapper[4801]: E1124 21:08:53.663919 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:53 crc kubenswrapper[4801]: E1124 21:08:53.664455 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:54 crc kubenswrapper[4801]: I1124 21:08:54.663700 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:54 crc kubenswrapper[4801]: I1124 21:08:54.663737 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:54 crc kubenswrapper[4801]: E1124 21:08:54.663904 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:54 crc kubenswrapper[4801]: E1124 21:08:54.664275 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:55 crc kubenswrapper[4801]: I1124 21:08:55.662846 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:55 crc kubenswrapper[4801]: I1124 21:08:55.662911 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:55 crc kubenswrapper[4801]: E1124 21:08:55.663739 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:55 crc kubenswrapper[4801]: E1124 21:08:55.663858 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:55 crc kubenswrapper[4801]: I1124 21:08:55.665126 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:08:55 crc kubenswrapper[4801]: E1124 21:08:55.665523 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrqff_openshift-ovn-kubernetes(6757adc4-e0f2-49a6-8320-29cb96e4a10f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" Nov 24 21:08:56 crc kubenswrapper[4801]: I1124 21:08:56.663217 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:56 crc kubenswrapper[4801]: I1124 21:08:56.663256 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:56 crc kubenswrapper[4801]: E1124 21:08:56.663579 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:56 crc kubenswrapper[4801]: E1124 21:08:56.663726 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:57 crc kubenswrapper[4801]: I1124 21:08:57.663583 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:57 crc kubenswrapper[4801]: I1124 21:08:57.663608 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:57 crc kubenswrapper[4801]: E1124 21:08:57.663818 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:08:57 crc kubenswrapper[4801]: E1124 21:08:57.664247 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:58 crc kubenswrapper[4801]: I1124 21:08:58.663149 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:08:58 crc kubenswrapper[4801]: I1124 21:08:58.663167 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:08:58 crc kubenswrapper[4801]: E1124 21:08:58.665289 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:08:58 crc kubenswrapper[4801]: E1124 21:08:58.665432 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:08:59 crc kubenswrapper[4801]: I1124 21:08:59.663164 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:08:59 crc kubenswrapper[4801]: I1124 21:08:59.663181 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:08:59 crc kubenswrapper[4801]: E1124 21:08:59.663419 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:08:59 crc kubenswrapper[4801]: E1124 21:08:59.663540 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:00 crc kubenswrapper[4801]: I1124 21:09:00.663729 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:00 crc kubenswrapper[4801]: E1124 21:09:00.663898 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:00 crc kubenswrapper[4801]: I1124 21:09:00.664184 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:00 crc kubenswrapper[4801]: E1124 21:09:00.664271 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:01 crc kubenswrapper[4801]: I1124 21:09:01.663470 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:01 crc kubenswrapper[4801]: I1124 21:09:01.663762 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:01 crc kubenswrapper[4801]: E1124 21:09:01.663849 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:01 crc kubenswrapper[4801]: E1124 21:09:01.664071 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:02 crc kubenswrapper[4801]: I1124 21:09:02.663892 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:02 crc kubenswrapper[4801]: E1124 21:09:02.664160 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:02 crc kubenswrapper[4801]: I1124 21:09:02.664706 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:02 crc kubenswrapper[4801]: E1124 21:09:02.664834 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:03 crc kubenswrapper[4801]: I1124 21:09:03.663035 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:03 crc kubenswrapper[4801]: I1124 21:09:03.663135 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:03 crc kubenswrapper[4801]: E1124 21:09:03.663287 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:03 crc kubenswrapper[4801]: E1124 21:09:03.663548 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.663832 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.664034 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:04 crc kubenswrapper[4801]: E1124 21:09:04.664315 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:04 crc kubenswrapper[4801]: E1124 21:09:04.664637 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.925534 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/1.log" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.926170 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/0.log" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.926268 4801 generic.go:334] "Generic (PLEG): container finished" podID="5f348c59-5453-436a-bcce-548bdef22a27" containerID="cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e" exitCode=1 Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.926315 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerDied","Data":"cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e"} Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.926402 4801 scope.go:117] "RemoveContainer" containerID="31b2daf661d1e6186e09c172a27a58419cf505a2db815279305a6b6310d98981" Nov 24 21:09:04 crc kubenswrapper[4801]: I1124 21:09:04.926841 4801 scope.go:117] "RemoveContainer" containerID="cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e" Nov 24 21:09:04 crc kubenswrapper[4801]: E1124 21:09:04.926990 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gdjvp_openshift-multus(5f348c59-5453-436a-bcce-548bdef22a27)\"" pod="openshift-multus/multus-gdjvp" podUID="5f348c59-5453-436a-bcce-548bdef22a27" Nov 24 21:09:05 crc kubenswrapper[4801]: I1124 21:09:05.663212 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:05 crc kubenswrapper[4801]: I1124 21:09:05.663242 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:05 crc kubenswrapper[4801]: E1124 21:09:05.663999 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:05 crc kubenswrapper[4801]: E1124 21:09:05.664088 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:05 crc kubenswrapper[4801]: I1124 21:09:05.933059 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/1.log" Nov 24 21:09:06 crc kubenswrapper[4801]: I1124 21:09:06.663433 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:06 crc kubenswrapper[4801]: I1124 21:09:06.663481 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:06 crc kubenswrapper[4801]: E1124 21:09:06.663652 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:06 crc kubenswrapper[4801]: E1124 21:09:06.664016 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.663927 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.663932 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:07 crc kubenswrapper[4801]: E1124 21:09:07.664157 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:07 crc kubenswrapper[4801]: E1124 21:09:07.664353 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.665543 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.941785 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/3.log" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.945150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerStarted","Data":"624b913e9b3d3311007ee1800920070fc6168d102b51d6a2cf31ad81fad53194"} Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.946519 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:09:07 crc kubenswrapper[4801]: I1124 21:09:07.996981 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podStartSLOduration=98.996945205 podStartE2EDuration="1m38.996945205s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:07.995580854 +0000 UTC m=+120.078167534" 
watchObservedRunningTime="2025-11-24 21:09:07.996945205 +0000 UTC m=+120.079531905" Nov 24 21:09:08 crc kubenswrapper[4801]: I1124 21:09:08.593120 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-llnf4"] Nov 24 21:09:08 crc kubenswrapper[4801]: I1124 21:09:08.593406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:08 crc kubenswrapper[4801]: E1124 21:09:08.593605 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:08 crc kubenswrapper[4801]: E1124 21:09:08.637891 4801 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 21:09:08 crc kubenswrapper[4801]: I1124 21:09:08.663482 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:08 crc kubenswrapper[4801]: E1124 21:09:08.664732 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:08 crc kubenswrapper[4801]: I1124 21:09:08.664768 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:08 crc kubenswrapper[4801]: E1124 21:09:08.664941 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:08 crc kubenswrapper[4801]: E1124 21:09:08.789039 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:09:09 crc kubenswrapper[4801]: I1124 21:09:09.663176 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:09 crc kubenswrapper[4801]: E1124 21:09:09.663675 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:10 crc kubenswrapper[4801]: I1124 21:09:10.662915 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:10 crc kubenswrapper[4801]: I1124 21:09:10.662988 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:10 crc kubenswrapper[4801]: I1124 21:09:10.663018 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:10 crc kubenswrapper[4801]: E1124 21:09:10.663126 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:10 crc kubenswrapper[4801]: E1124 21:09:10.663233 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:10 crc kubenswrapper[4801]: E1124 21:09:10.663350 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:11 crc kubenswrapper[4801]: I1124 21:09:11.663205 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:11 crc kubenswrapper[4801]: E1124 21:09:11.663428 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:12 crc kubenswrapper[4801]: I1124 21:09:12.663237 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:12 crc kubenswrapper[4801]: I1124 21:09:12.663283 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:12 crc kubenswrapper[4801]: E1124 21:09:12.663462 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:12 crc kubenswrapper[4801]: I1124 21:09:12.663475 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:12 crc kubenswrapper[4801]: E1124 21:09:12.663652 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:12 crc kubenswrapper[4801]: E1124 21:09:12.663736 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:13 crc kubenswrapper[4801]: I1124 21:09:13.663313 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:13 crc kubenswrapper[4801]: E1124 21:09:13.663529 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:13 crc kubenswrapper[4801]: E1124 21:09:13.791023 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:09:14 crc kubenswrapper[4801]: I1124 21:09:14.662959 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:14 crc kubenswrapper[4801]: E1124 21:09:14.663166 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:14 crc kubenswrapper[4801]: I1124 21:09:14.663290 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:14 crc kubenswrapper[4801]: E1124 21:09:14.663551 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:14 crc kubenswrapper[4801]: I1124 21:09:14.664431 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:14 crc kubenswrapper[4801]: E1124 21:09:14.664834 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:15 crc kubenswrapper[4801]: I1124 21:09:15.663808 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:15 crc kubenswrapper[4801]: E1124 21:09:15.663997 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:16 crc kubenswrapper[4801]: I1124 21:09:16.663913 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:16 crc kubenswrapper[4801]: E1124 21:09:16.664168 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:16 crc kubenswrapper[4801]: I1124 21:09:16.664619 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:16 crc kubenswrapper[4801]: E1124 21:09:16.664755 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:16 crc kubenswrapper[4801]: I1124 21:09:16.665132 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:16 crc kubenswrapper[4801]: E1124 21:09:16.665246 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:17 crc kubenswrapper[4801]: I1124 21:09:17.663187 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:17 crc kubenswrapper[4801]: E1124 21:09:17.663361 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:18 crc kubenswrapper[4801]: I1124 21:09:18.663684 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:18 crc kubenswrapper[4801]: I1124 21:09:18.663717 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:18 crc kubenswrapper[4801]: I1124 21:09:18.663867 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:18 crc kubenswrapper[4801]: E1124 21:09:18.666206 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:18 crc kubenswrapper[4801]: E1124 21:09:18.666353 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:18 crc kubenswrapper[4801]: E1124 21:09:18.666622 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:18 crc kubenswrapper[4801]: E1124 21:09:18.791964 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:09:19 crc kubenswrapper[4801]: I1124 21:09:19.524148 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:09:19 crc kubenswrapper[4801]: I1124 21:09:19.663463 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:19 crc kubenswrapper[4801]: E1124 21:09:19.663785 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:19 crc kubenswrapper[4801]: I1124 21:09:19.664003 4801 scope.go:117] "RemoveContainer" containerID="cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e" Nov 24 21:09:19 crc kubenswrapper[4801]: I1124 21:09:19.997124 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/1.log" Nov 24 21:09:19 crc kubenswrapper[4801]: I1124 21:09:19.997178 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerStarted","Data":"bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a"} Nov 24 21:09:20 crc kubenswrapper[4801]: I1124 21:09:20.663986 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:20 crc kubenswrapper[4801]: I1124 21:09:20.664049 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:20 crc kubenswrapper[4801]: E1124 21:09:20.664449 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:20 crc kubenswrapper[4801]: E1124 21:09:20.665011 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:20 crc kubenswrapper[4801]: I1124 21:09:20.665463 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:20 crc kubenswrapper[4801]: E1124 21:09:20.665634 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:21 crc kubenswrapper[4801]: I1124 21:09:21.663486 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:21 crc kubenswrapper[4801]: E1124 21:09:21.663684 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:22 crc kubenswrapper[4801]: I1124 21:09:22.663838 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:22 crc kubenswrapper[4801]: E1124 21:09:22.664033 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:09:22 crc kubenswrapper[4801]: I1124 21:09:22.664123 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:22 crc kubenswrapper[4801]: I1124 21:09:22.664253 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:22 crc kubenswrapper[4801]: E1124 21:09:22.664399 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:09:22 crc kubenswrapper[4801]: E1124 21:09:22.664590 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-llnf4" podUID="3434122b-ad4c-40f8-89fc-8829fd158ae3" Nov 24 21:09:23 crc kubenswrapper[4801]: I1124 21:09:23.663328 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:23 crc kubenswrapper[4801]: E1124 21:09:23.663767 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.663470 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.663595 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.663461 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.667955 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.668876 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.669975 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.671212 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.673869 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 21:09:24 crc kubenswrapper[4801]: I1124 21:09:24.676624 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 21:09:25 crc kubenswrapper[4801]: I1124 21:09:25.663661 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.212983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.272237 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.273282 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.279267 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.280322 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.280657 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.280771 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.282084 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.286520 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.288038 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.288426 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.288733 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291849 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291902 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291938 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291983 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.291991 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.293301 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bhg\" (UniqueName: \"kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.293356 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrgc\" (UniqueName: 
\"kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.293453 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.293492 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.293496 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.294174 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx28l\" (UniqueName: \"kubernetes.io/projected/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-kube-api-access-jx28l\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.294236 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.294280 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.294855 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wd6s2"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.297246 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.297694 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.298345 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.298524 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.298875 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.304516 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.306406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.309437 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.311845 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.312261 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sp8cw"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.313917 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.315567 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.316701 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.317454 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.317905 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.343168 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.343655 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.343761 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p42zd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.343851 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.343885 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.344049 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.344599 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.344916 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.345207 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.345393 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.345556 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.345722 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.345882 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.346033 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.346176 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.346355 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 21:09:28 crc 
kubenswrapper[4801]: I1124 21:09:28.346672 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.346812 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.347054 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.347202 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.347510 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.347947 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348067 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348158 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348300 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348404 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348540 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348577 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348760 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348890 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348916 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348998 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.349055 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.348774 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.349257 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.349696 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.349869 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 
21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.351775 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.351940 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjg49"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.352184 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.352661 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.352836 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gt2ln"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.353072 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.353216 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.353240 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.353404 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.353485 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.355636 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.358281 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.360943 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.361144 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.361631 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.362354 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.363077 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.364647 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.364713 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.364973 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.365158 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.366057 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.366099 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.366433 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.367220 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.367314 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 21:09:28 
crc kubenswrapper[4801]: I1124 21:09:28.367389 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.367455 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.367452 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.367516 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.368461 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.371695 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24thm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.371829 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.372056 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.372229 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.372336 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7wsqf"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.372664 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.372769 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.380736 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.380912 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.390324 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.390616 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.390858 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.391498 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.397106 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.397974 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.399040 4801 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.399188 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.400969 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.403408 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-default-certificate\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.415180 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.415245 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrvb\" (UniqueName: \"kubernetes.io/projected/6e8b3d74-50eb-4b81-a035-fa24854747ab-kube-api-access-btrvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 
21:09:28.400662 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.404867 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.415545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.404937 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405006 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405082 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405151 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405247 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405322 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.415594 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-config\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405386 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.416022 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-stats-auth\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.416057 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405420 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.416826 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.417246 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:09:28 crc 
kubenswrapper[4801]: I1124 21:09:28.417739 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418524 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-config\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418768 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66xq\" (UniqueName: \"kubernetes.io/projected/9b7b0134-906e-4ab8-8535-8f33f6879cb8-kube-api-access-l66xq\") pod \"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418804 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24sk6\" (UniqueName: \"kubernetes.io/projected/e62f3661-a867-4120-8ee5-79f7c76cedaa-kube-api-access-24sk6\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418841 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405469 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405580 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405827 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.405858 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.406046 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419507 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.418838 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8b3d74-50eb-4b81-a035-fa24854747ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419707 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-machine-approver-tls\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458f9b65-7105-44c8-9322-5045d0087cc0-service-ca-bundle\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419751 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/841392cb-04d4-4f8a-93ba-dc7f3abf589b-serving-cert\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419784 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419804 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-serving-cert\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 
crc kubenswrapper[4801]: I1124 21:09:28.419839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419857 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-serving-cert\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419877 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.419898 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-serving-cert\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420032 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgf2\" (UniqueName: 
\"kubernetes.io/projected/ca285374-4829-4b65-835c-df5877019e4c-kube-api-access-csgf2\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420307 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420342 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b7b0134-906e-4ab8-8535-8f33f6879cb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420380 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskv7\" (UniqueName: \"kubernetes.io/projected/458f9b65-7105-44c8-9322-5045d0087cc0-kube-api-access-fskv7\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420412 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-config\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " 
pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420462 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-trusted-ca\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420516 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420585 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.420745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-config\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.421009 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca\") 
pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.421980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.422232 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca285374-4829-4b65-835c-df5877019e4c-serving-cert\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.422281 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmsrh\" (UniqueName: \"kubernetes.io/projected/841392cb-04d4-4f8a-93ba-dc7f3abf589b-kube-api-access-gmsrh\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.422331 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.423923 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.424089 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.425188 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.422354 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zbv\" (UniqueName: \"kubernetes.io/projected/66361a77-fd52-4c44-bc62-9df560348e1b-kube-api-access-w8zbv\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435222 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bhg\" (UniqueName: \"kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435276 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrgc\" (UniqueName: 
\"kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435307 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-auth-proxy-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435329 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435632 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.435648 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66361a77-fd52-4c44-bc62-9df560348e1b-audit-dir\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436241 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-images\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436288 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsbt9"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436314 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436339 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-client\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9pk\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-kube-api-access-9q9pk\") pod 
\"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436409 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b3d74-50eb-4b81-a035-fa24854747ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436452 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd837d07-9682-448e-85c3-f29f598b9441-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436470 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g2cg\" (UniqueName: \"kubernetes.io/projected/f363b84a-05e1-4787-97fe-7b5d24def92d-kube-api-access-2g2cg\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc 
kubenswrapper[4801]: I1124 21:09:28.436496 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-encryption-config\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436522 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx28l\" (UniqueName: \"kubernetes.io/projected/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-kube-api-access-jx28l\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436541 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436592 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436625 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vms\" (UniqueName: 
\"kubernetes.io/projected/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-kube-api-access-z8vms\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436652 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-client\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd837d07-9682-448e-85c3-f29f598b9441-config\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd837d07-9682-448e-85c3-f29f598b9441-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436724 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436746 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxzc\" (UniqueName: \"kubernetes.io/projected/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-kube-api-access-bwxzc\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436763 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-metrics-certs\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436779 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-service-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436802 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdpwj\" (UniqueName: 
\"kubernetes.io/projected/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-kube-api-access-tdpwj\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436843 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-audit-policies\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-config\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436885 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-service-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 
21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436907 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436923 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f363b84a-05e1-4787-97fe-7b5d24def92d-metrics-tls\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.436955 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.437201 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.437281 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441036 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441539 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441095 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441127 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441202 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.441385 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.442157 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.449045 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.454315 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.458475 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.461422 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbkxm\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.461759 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.461944 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.463768 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.465246 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.465611 4801 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.465717 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.465759 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.465886 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert\") pod \"controller-manager-879f6c89f-flzk9\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.466453 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.466685 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.470810 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.472190 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.473306 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.473713 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.473910 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.474485 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.474969 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.476091 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.477006 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.477233 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.478294 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck7jb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.478853 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.479859 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wtj6n"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.480270 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.481307 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wd6s2"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.481805 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.482896 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.483872 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.484210 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.485603 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sp8cw"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.485680 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.489354 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mpfrd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.489727 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.490110 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ssn55"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.490490 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.490864 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.491707 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.492167 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.493585 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.494568 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l2sjb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.495060 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.498741 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.501289 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.501789 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.501914 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.504727 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p42zd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.505390 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjg49"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.507175 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.516590 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gt2ln"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.516682 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.516701 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.527715 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.529626 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.532691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrgc\" (UniqueName: \"kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc\") pod \"route-controller-manager-6576b87f9c-tbkxm\" 
(UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.539852 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-serving-cert\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.544436 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.544558 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-serving-cert\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.544594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgf2\" (UniqueName: \"kubernetes.io/projected/ca285374-4829-4b65-835c-df5877019e4c-kube-api-access-csgf2\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.544672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.543423 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsbt9"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.544799 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fskv7\" (UniqueName: \"kubernetes.io/projected/458f9b65-7105-44c8-9322-5045d0087cc0-kube-api-access-fskv7\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545146 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-config\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545341 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-trusted-ca\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545480 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b7b0134-906e-4ab8-8535-8f33f6879cb8-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545596 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-config\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545689 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca285374-4829-4b65-835c-df5877019e4c-serving-cert\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545943 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsrh\" (UniqueName: \"kubernetes.io/projected/841392cb-04d4-4f8a-93ba-dc7f3abf589b-kube-api-access-gmsrh\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546061 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8zbv\" (UniqueName: \"kubernetes.io/projected/66361a77-fd52-4c44-bc62-9df560348e1b-kube-api-access-w8zbv\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546399 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-auth-proxy-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546447 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546507 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66361a77-fd52-4c44-bc62-9df560348e1b-audit-dir\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546523 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-config\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546528 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-serving-cert\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546546 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-profile-collector-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.546674 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1368b67f-4d82-435c-8afd-b1c16727f118-proxy-tls\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.547804 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-auth-proxy-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.548023 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66361a77-fd52-4c44-bc62-9df560348e1b-audit-dir\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.548073 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-images\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.548170 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.548205 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-client\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.545462 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.549327 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-config\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556192 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9pk\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-kube-api-access-9q9pk\") pod 
\"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556260 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b3d74-50eb-4b81-a035-fa24854747ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd837d07-9682-448e-85c3-f29f598b9441-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556304 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g2cg\" (UniqueName: \"kubernetes.io/projected/f363b84a-05e1-4787-97fe-7b5d24def92d-kube-api-access-2g2cg\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556336 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5hs\" (UniqueName: \"kubernetes.io/projected/1368b67f-4d82-435c-8afd-b1c16727f118-kube-api-access-hq5hs\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 
crc kubenswrapper[4801]: I1124 21:09:28.556857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-encryption-config\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556901 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.556931 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.557271 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.557628 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841392cb-04d4-4f8a-93ba-dc7f3abf589b-trusted-ca\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " 
pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.558113 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.558229 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-images\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.558594 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-serving-cert\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.558817 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vms\" (UniqueName: \"kubernetes.io/projected/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-kube-api-access-z8vms\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559208 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1368b67f-4d82-435c-8afd-b1c16727f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559309 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6833ad-116b-4620-a27b-59271899cf0c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559479 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559556 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-client\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd837d07-9682-448e-85c3-f29f598b9441-config\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-config\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559720 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd837d07-9682-448e-85c3-f29f598b9441-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559816 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559845 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzg4m\" (UniqueName: \"kubernetes.io/projected/dac2b280-3c5e-43ff-9b5e-b46040ca4904-kube-api-access-mzg4m\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-tmpfs\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559903 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxzc\" (UniqueName: \"kubernetes.io/projected/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-kube-api-access-bwxzc\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559925 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-metrics-certs\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559951 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-service-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559984 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpwj\" (UniqueName: \"kubernetes.io/projected/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-kube-api-access-tdpwj\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 
21:09:28.560012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.560048 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-audit-policies\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.560076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-config\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.560103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlpj8\" (UniqueName: \"kubernetes.io/projected/1e6833ad-116b-4620-a27b-59271899cf0c-kube-api-access-vlpj8\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.560135 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-service-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559260 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24thm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.558913 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b7b0134-906e-4ab8-8535-8f33f6879cb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559144 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-client\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.559571 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b3d74-50eb-4b81-a035-fa24854747ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.561180 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-service-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.561505 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd837d07-9682-448e-85c3-f29f598b9441-config\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.561930 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-audit-policies\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562057 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562357 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e62f3661-a867-4120-8ee5-79f7c76cedaa-etcd-ca\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562590 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-service-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562701 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562743 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f363b84a-05e1-4787-97fe-7b5d24def92d-metrics-tls\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562778 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.562987 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-default-certificate\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btrvb\" (UniqueName: 
\"kubernetes.io/projected/6e8b3d74-50eb-4b81-a035-fa24854747ab-kube-api-access-btrvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563041 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcp8p\" (UniqueName: \"kubernetes.io/projected/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-kube-api-access-gcp8p\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563074 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563103 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-config\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563126 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-webhook-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563153 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563203 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-config\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-stats-auth\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24sk6\" (UniqueName: \"kubernetes.io/projected/e62f3661-a867-4120-8ee5-79f7c76cedaa-kube-api-access-24sk6\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563267 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e8b3d74-50eb-4b81-a035-fa24854747ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563302 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66xq\" (UniqueName: \"kubernetes.io/projected/9b7b0134-906e-4ab8-8535-8f33f6879cb8-kube-api-access-l66xq\") pod \"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-machine-approver-tls\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563344 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458f9b65-7105-44c8-9322-5045d0087cc0-service-ca-bundle\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/841392cb-04d4-4f8a-93ba-dc7f3abf589b-serving-cert\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 
21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563399 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563417 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-serving-cert\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563438 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-client\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563456 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563710 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ca285374-4829-4b65-835c-df5877019e4c-serving-cert\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.563985 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca285374-4829-4b65-835c-df5877019e4c-config\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.564303 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd837d07-9682-448e-85c3-f29f598b9441-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.565083 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-metrics-certs\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.565313 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9p8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.565391 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bhg\" (UniqueName: \"kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg\") pod \"controller-manager-879f6c89f-flzk9\" (UID: 
\"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.566143 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66361a77-fd52-4c44-bc62-9df560348e1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.566225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458f9b65-7105-44c8-9322-5045d0087cc0-service-ca-bundle\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.566678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-config\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.566800 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.567165 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.567447 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-config\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.567821 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x7kmk"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.568538 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f363b84a-05e1-4787-97fe-7b5d24def92d-metrics-tls\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.568601 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-stats-auth\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.568611 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e8b3d74-50eb-4b81-a035-fa24854747ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.569172 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.569327 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-machine-approver-tls\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.569649 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.569874 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.569987 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66361a77-fd52-4c44-bc62-9df560348e1b-encryption-config\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 
21:09:28.570948 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.571859 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/458f9b65-7105-44c8-9322-5045d0087cc0-default-certificate\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.572125 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.573337 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.573947 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/841392cb-04d4-4f8a-93ba-dc7f3abf589b-serving-cert\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.574356 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e62f3661-a867-4120-8ee5-79f7c76cedaa-serving-cert\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.577455 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv"] Nov 24 21:09:28 crc 
kubenswrapper[4801]: I1124 21:09:28.580205 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l2sjb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.580409 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.582145 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.582929 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.584245 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wtj6n"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.585419 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.586597 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.587266 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx28l\" (UniqueName: \"kubernetes.io/projected/58cb28d2-5586-4eda-a6a8-0a5b3b494e41-kube-api-access-jx28l\") pod \"openshift-apiserver-operator-796bbdcf4f-lxmnm\" (UID: \"58cb28d2-5586-4eda-a6a8-0a5b3b494e41\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.587827 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9p8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.589064 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.590209 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mpfrd"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.591311 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.592411 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.593578 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck7jb"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.594774 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ssn55"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.594976 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.595942 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.597044 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x7kmk"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.598126 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rz5j7"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.598986 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.615914 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.616012 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.636489 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.648815 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.657533 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.664864 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670338 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-profile-collector-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1368b67f-4d82-435c-8afd-b1c16727f118-proxy-tls\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670488 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq5hs\" (UniqueName: \"kubernetes.io/projected/1368b67f-4d82-435c-8afd-b1c16727f118-kube-api-access-hq5hs\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670548 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1368b67f-4d82-435c-8afd-b1c16727f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670583 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6833ad-116b-4620-a27b-59271899cf0c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670610 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670645 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670667 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzg4m\" (UniqueName: \"kubernetes.io/projected/dac2b280-3c5e-43ff-9b5e-b46040ca4904-kube-api-access-mzg4m\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670690 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-tmpfs\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.670804 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlpj8\" (UniqueName: \"kubernetes.io/projected/1e6833ad-116b-4620-a27b-59271899cf0c-kube-api-access-vlpj8\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.671717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcp8p\" (UniqueName: \"kubernetes.io/projected/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-kube-api-access-gcp8p\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.671893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1368b67f-4d82-435c-8afd-b1c16727f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.672082 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-tmpfs\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.673020 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-webhook-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.677160 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.696451 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.716248 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.748590 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.758268 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.776287 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.796955 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.817760 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.845194 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.856028 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 
21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.856772 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.869038 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.876506 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: W1124 21:09:28.877594 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c6c44d_dd82_4fb9_99e4_ab1f584909f4.slice/crio-8c0ff659b1faa7b3cbf616f42f0b02a8129f502e20bd250c9abe250bb6af7704 WatchSource:0}: Error finding container 8c0ff659b1faa7b3cbf616f42f0b02a8129f502e20bd250c9abe250bb6af7704: Status 404 returned error can't find the container with id 8c0ff659b1faa7b3cbf616f42f0b02a8129f502e20bd250c9abe250bb6af7704 Nov 24 21:09:28 crc kubenswrapper[4801]: W1124 21:09:28.877908 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63c79716_9856_4a80_bc3e_8d016e1bfc97.slice/crio-af34ce972fe038d580380d5e33eba331f5f3181f4af6c6d5710c8d3b3207e204 WatchSource:0}: Error finding container af34ce972fe038d580380d5e33eba331f5f3181f4af6c6d5710c8d3b3207e204: Status 404 returned error can't find the container with id af34ce972fe038d580380d5e33eba331f5f3181f4af6c6d5710c8d3b3207e204 Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.893665 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm"] Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.896336 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.915764 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 21:09:28 crc kubenswrapper[4801]: W1124 21:09:28.923330 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58cb28d2_5586_4eda_a6a8_0a5b3b494e41.slice/crio-351f2006377c6549f9b2a46886d8887aaa347e167094d143364b0f0a242bee52 WatchSource:0}: Error finding container 351f2006377c6549f9b2a46886d8887aaa347e167094d143364b0f0a242bee52: Status 404 returned error can't find the container with id 351f2006377c6549f9b2a46886d8887aaa347e167094d143364b0f0a242bee52 Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.936305 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.955934 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.976784 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 21:09:28 crc kubenswrapper[4801]: I1124 21:09:28.996529 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.016840 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.035140 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.049831 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" event={"ID":"58cb28d2-5586-4eda-a6a8-0a5b3b494e41","Type":"ContainerStarted","Data":"351f2006377c6549f9b2a46886d8887aaa347e167094d143364b0f0a242bee52"} Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.051319 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" event={"ID":"63c79716-9856-4a80-bc3e-8d016e1bfc97","Type":"ContainerStarted","Data":"af34ce972fe038d580380d5e33eba331f5f3181f4af6c6d5710c8d3b3207e204"} Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.052649 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" event={"ID":"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4","Type":"ContainerStarted","Data":"8c0ff659b1faa7b3cbf616f42f0b02a8129f502e20bd250c9abe250bb6af7704"} Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.071243 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.083043 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.096499 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.116041 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.149687 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 21:09:29 crc 
kubenswrapper[4801]: I1124 21:09:29.155951 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.177272 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.195758 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.217396 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.236803 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.257088 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.276518 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.334536 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.334670 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.337114 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.349617 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-webhook-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.350612 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6833ad-116b-4620-a27b-59271899cf0c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.351182 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.357659 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.364706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-profile-collector-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.377814 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.396600 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.416817 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.437312 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.456496 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.466964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1368b67f-4d82-435c-8afd-b1c16727f118-proxy-tls\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.477979 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.494520 4801 request.go:700] Waited for 1.017260986s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.496932 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.517007 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.535895 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.557272 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.576672 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.596840 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.617163 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.638034 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.657033 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: E1124 21:09:29.671216 4801 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 21:09:29 crc kubenswrapper[4801]: E1124 21:09:29.671440 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert 
podName:dac2b280-3c5e-43ff-9b5e-b46040ca4904 nodeName:}" failed. No retries permitted until 2025-11-24 21:09:30.171418762 +0000 UTC m=+142.254005432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert") pod "catalog-operator-68c6474976-8r2wf" (UID: "dac2b280-3c5e-43ff-9b5e-b46040ca4904") : failed to sync secret cache: timed out waiting for the condition Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.676518 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.696779 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.732320 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.736428 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.756145 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.776166 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.795764 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.815691 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.837497 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.856083 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.888561 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.896559 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.916933 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.937495 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.956998 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.976914 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 21:09:29 crc kubenswrapper[4801]: I1124 21:09:29.997066 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.017409 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.040278 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.057727 4801 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.058168 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" event={"ID":"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4","Type":"ContainerStarted","Data":"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d"} Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.058420 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.061055 4801 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-flzk9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.061137 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.061332 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" event={"ID":"58cb28d2-5586-4eda-a6a8-0a5b3b494e41","Type":"ContainerStarted","Data":"c5c8584a400d64fb0efec682b0898c65b34a22ab382112ff1d1d9dd9a4de7e92"} Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.062694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" 
event={"ID":"63c79716-9856-4a80-bc3e-8d016e1bfc97","Type":"ContainerStarted","Data":"a5b5775549f8356f4ccd1091965809faa45022b5d9f6611f12d0e777c007f709"} Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.063381 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.076659 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.095900 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.117761 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.136880 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.157149 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.176981 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.197394 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.198114 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.202295 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dac2b280-3c5e-43ff-9b5e-b46040ca4904-srv-cert\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.261709 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskv7\" (UniqueName: \"kubernetes.io/projected/458f9b65-7105-44c8-9322-5045d0087cc0-kube-api-access-fskv7\") pod \"router-default-5444994796-7wsqf\" (UID: \"458f9b65-7105-44c8-9322-5045d0087cc0\") " pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.276109 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.286796 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8zbv\" (UniqueName: \"kubernetes.io/projected/66361a77-fd52-4c44-bc62-9df560348e1b-kube-api-access-w8zbv\") pod \"apiserver-7bbb656c7d-cg997\" (UID: \"66361a77-fd52-4c44-bc62-9df560348e1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.292581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmsrh\" (UniqueName: \"kubernetes.io/projected/841392cb-04d4-4f8a-93ba-dc7f3abf589b-kube-api-access-gmsrh\") pod \"console-operator-58897d9998-jjg49\" (UID: \"841392cb-04d4-4f8a-93ba-dc7f3abf589b\") " pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 
21:09:30.310350 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgf2\" (UniqueName: \"kubernetes.io/projected/ca285374-4829-4b65-835c-df5877019e4c-kube-api-access-csgf2\") pod \"authentication-operator-69f744f599-wd6s2\" (UID: \"ca285374-4829-4b65-835c-df5877019e4c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.318935 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.341922 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9pk\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-kube-api-access-9q9pk\") pod \"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.355909 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g2cg\" (UniqueName: \"kubernetes.io/projected/f363b84a-05e1-4787-97fe-7b5d24def92d-kube-api-access-2g2cg\") pod \"dns-operator-744455d44c-24thm\" (UID: \"f363b84a-05e1-4787-97fe-7b5d24def92d\") " pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.387124 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.401400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vms\" (UniqueName: \"kubernetes.io/projected/cdb2d2a4-a011-4a8d-988a-d24b129ab9f0-kube-api-access-z8vms\") pod \"openshift-config-operator-7777fb866f-p42zd\" (UID: \"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.402604 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:30 crc kubenswrapper[4801]: W1124 21:09:30.418483 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458f9b65_7105_44c8_9322_5045d0087cc0.slice/crio-a8743aed1cbdbe64612c02a50ef1077496e1842376464f7701ed2f4e4d87e756 WatchSource:0}: Error finding container a8743aed1cbdbe64612c02a50ef1077496e1842376464f7701ed2f4e4d87e756: Status 404 returned error can't find the container with id a8743aed1cbdbe64612c02a50ef1077496e1842376464f7701ed2f4e4d87e756 Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.422018 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd837d07-9682-448e-85c3-f29f598b9441-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s6zhb\" (UID: \"cd837d07-9682-448e-85c3-f29f598b9441\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.426323 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b3bb22d-e640-430f-8e3b-4a15d1aa6070-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-drpf8\" (UID: \"4b3bb22d-e640-430f-8e3b-4a15d1aa6070\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.461131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxzc\" (UniqueName: \"kubernetes.io/projected/f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c-kube-api-access-bwxzc\") pod \"machine-api-operator-5694c8668f-sp8cw\" (UID: \"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.474844 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.477622 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpwj\" (UniqueName: \"kubernetes.io/projected/46ac6500-8082-4d0e-9cb2-3c4fc5f44621-kube-api-access-tdpwj\") pod \"machine-approver-56656f9798-gwwj9\" (UID: \"46ac6500-8082-4d0e-9cb2-3c4fc5f44621\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.500351 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.500851 4801 request.go:700] Waited for 1.93663531s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.506972 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.512697 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.519380 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24sk6\" (UniqueName: \"kubernetes.io/projected/e62f3661-a867-4120-8ee5-79f7c76cedaa-kube-api-access-24sk6\") pod \"etcd-operator-b45778765-gt2ln\" (UID: \"e62f3661-a867-4120-8ee5-79f7c76cedaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.532273 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66xq\" (UniqueName: \"kubernetes.io/projected/9b7b0134-906e-4ab8-8535-8f33f6879cb8-kube-api-access-l66xq\") pod \"cluster-samples-operator-665b6dd947-cgbf4\" (UID: \"9b7b0134-906e-4ab8-8535-8f33f6879cb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.558849 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.559635 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btrvb\" (UniqueName: \"kubernetes.io/projected/6e8b3d74-50eb-4b81-a035-fa24854747ab-kube-api-access-btrvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mfjxs\" (UID: \"6e8b3d74-50eb-4b81-a035-fa24854747ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.562843 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wr8kc\" (UID: \"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.574085 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.582876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.589386 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.600236 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.601828 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.616487 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.631693 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.638439 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.657962 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.667080 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.685668 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.691822 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.693599 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.702929 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjg49"] Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.702942 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.721907 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.737525 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.810926 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5hs\" (UniqueName: \"kubernetes.io/projected/1368b67f-4d82-435c-8afd-b1c16727f118-kube-api-access-hq5hs\") pod \"machine-config-controller-84d6567774-z5rv4\" (UID: \"1368b67f-4d82-435c-8afd-b1c16727f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.830202 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24thm"] Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.845877 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.853134 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzg4m\" (UniqueName: \"kubernetes.io/projected/dac2b280-3c5e-43ff-9b5e-b46040ca4904-kube-api-access-mzg4m\") pod \"catalog-operator-68c6474976-8r2wf\" (UID: \"dac2b280-3c5e-43ff-9b5e-b46040ca4904\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.865838 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlpj8\" (UniqueName: \"kubernetes.io/projected/1e6833ad-116b-4620-a27b-59271899cf0c-kube-api-access-vlpj8\") pod \"package-server-manager-789f6589d5-9t4jd\" (UID: \"1e6833ad-116b-4620-a27b-59271899cf0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.871476 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcp8p\" (UniqueName: \"kubernetes.io/projected/1d18ec0f-c3fb-43dc-99a7-e896cf2789a8-kube-api-access-gcp8p\") pod \"packageserver-d55dfcdfc-j2gwl\" (UID: \"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927569 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927604 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927641 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/360559c7-6d20-4c26-9cfd-3c82af2df553-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-proxy-tls\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927718 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vfqd\" (UniqueName: \"kubernetes.io/projected/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-kube-api-access-6vfqd\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927749 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927764 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-key\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927783 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927804 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927818 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-metrics-tls\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927834 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm6mw\" (UniqueName: \"kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-audit-dir\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927934 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-serving-cert\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.927990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928007 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928024 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928059 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtzb\" (UniqueName: \"kubernetes.io/projected/72eec86c-1cfa-4d98-861d-6a745c62cd32-kube-api-access-7gtzb\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928091 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-srv-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928107 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvlfv\" (UniqueName: \"kubernetes.io/projected/360559c7-6d20-4c26-9cfd-3c82af2df553-kube-api-access-vvlfv\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928123 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928141 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928156 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbj4f\" (UniqueName: \"kubernetes.io/projected/702dc398-86c5-4d2b-bc25-e8464ed36961-kube-api-access-tbj4f\") pod \"downloads-7954f5f757-mpfrd\" (UID: \"702dc398-86c5-4d2b-bc25-e8464ed36961\") " pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72eec86c-1cfa-4d98-861d-6a745c62cd32-config-volume\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928215 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2mz\" (UniqueName: \"kubernetes.io/projected/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-kube-api-access-zj2mz\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:30 crc kubenswrapper[4801]: 
I1124 21:09:30.928232 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928268 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928295 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-images\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928310 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928326 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928344 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjh6\" (UniqueName: \"kubernetes.io/projected/433b7c07-9a1f-4eb1-a82c-2d822e4af130-kube-api-access-lnjh6\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928359 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928416 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928452 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk74s\" (UniqueName: \"kubernetes.io/projected/ad527301-9226-42f3-a4e2-e10fde60c564-kube-api-access-xk74s\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928467 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lbrg\" (UniqueName: \"kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928495 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72eec86c-1cfa-4d98-861d-6a745c62cd32-metrics-tls\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928510 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bq9b\" (UniqueName: \"kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928562 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928624 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszwp\" (UniqueName: 
\"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-audit\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928655 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-encryption-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928694 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-serving-cert\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928716 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-config\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928731 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jxl\" (UniqueName: \"kubernetes.io/projected/089da0b2-23b8-431e-9033-255cfbf12d3a-kube-api-access-q9jxl\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928765 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928779 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928794 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928819 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: 
\"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928836 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928853 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928868 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928886 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-cabundle\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928904 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vfhdr\" (UniqueName: \"kubernetes.io/projected/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-kube-api-access-vfhdr\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928923 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928939 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928975 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.928991 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-client\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.929009 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jd7z\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-kube-api-access-5jd7z\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.929026 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26pl\" (UniqueName: \"kubernetes.io/projected/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-kube-api-access-z26pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.929111 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-trusted-ca\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931379 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931448 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931468 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931485 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931531 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkgth\" (UniqueName: \"kubernetes.io/projected/b4e7907d-1917-4617-b922-ca338cab51b3-kube-api-access-wkgth\") pod \"migrator-59844c95c7-2njc8\" (UID: \"b4e7907d-1917-4617-b922-ca338cab51b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931548 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: 
\"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931589 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931607 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931625 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931642 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931669 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-48n8b\" (UniqueName: \"kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931688 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-node-pullsecrets\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931704 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-image-import-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.931762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:30 crc kubenswrapper[4801]: E1124 21:09:30.935951 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.435937972 +0000 UTC m=+143.518524642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:30 crc kubenswrapper[4801]: I1124 21:09:30.947641 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.013491 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036144 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036351 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtzb\" (UniqueName: \"kubernetes.io/projected/72eec86c-1cfa-4d98-861d-6a745c62cd32-kube-api-access-7gtzb\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036454 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-srv-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvlfv\" (UniqueName: \"kubernetes.io/projected/360559c7-6d20-4c26-9cfd-3c82af2df553-kube-api-access-vvlfv\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036489 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036507 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036525 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbj4f\" (UniqueName: \"kubernetes.io/projected/702dc398-86c5-4d2b-bc25-e8464ed36961-kube-api-access-tbj4f\") pod \"downloads-7954f5f757-mpfrd\" (UID: \"702dc398-86c5-4d2b-bc25-e8464ed36961\") " pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036550 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72eec86c-1cfa-4d98-861d-6a745c62cd32-config-volume\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036583 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2mz\" (UniqueName: \"kubernetes.io/projected/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-kube-api-access-zj2mz\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036608 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036639 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-images\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036656 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036677 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036692 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnjh6\" (UniqueName: \"kubernetes.io/projected/433b7c07-9a1f-4eb1-a82c-2d822e4af130-kube-api-access-lnjh6\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036712 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036744 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036769 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036796 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk74s\" (UniqueName: \"kubernetes.io/projected/ad527301-9226-42f3-a4e2-e10fde60c564-kube-api-access-xk74s\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036818 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lbrg\" (UniqueName: 
\"kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036860 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72eec86c-1cfa-4d98-861d-6a745c62cd32-metrics-tls\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bq9b\" (UniqueName: \"kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036921 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036968 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cszwp\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.036991 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-audit\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037014 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-encryption-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037046 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-csi-data-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-serving-cert\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037114 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnmh\" (UniqueName: \"kubernetes.io/projected/90e9ba98-83c9-424e-bba8-ddef1a9354cf-kube-api-access-ftnmh\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037177 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9jxl\" (UniqueName: \"kubernetes.io/projected/089da0b2-23b8-431e-9033-255cfbf12d3a-kube-api-access-q9jxl\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037204 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037229 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-config\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037283 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 
21:09:31.037311 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037340 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037395 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-certs\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037455 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-cabundle\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: 
\"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037488 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpv2t\" (UniqueName: \"kubernetes.io/projected/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-kube-api-access-mpv2t\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037515 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037535 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhdr\" (UniqueName: \"kubernetes.io/projected/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-kube-api-access-vfhdr\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037551 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037574 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037597 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037620 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-client\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037666 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jd7z\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-kube-api-access-5jd7z\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-trusted-ca\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037720 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037746 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26pl\" (UniqueName: \"kubernetes.io/projected/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-kube-api-access-z26pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037802 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037822 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-socket-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc 
kubenswrapper[4801]: I1124 21:09:31.037849 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037872 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-registration-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037898 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkgth\" (UniqueName: \"kubernetes.io/projected/b4e7907d-1917-4617-b922-ca338cab51b3-kube-api-access-wkgth\") pod \"migrator-59844c95c7-2njc8\" (UID: \"b4e7907d-1917-4617-b922-ca338cab51b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037922 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037945 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config\") pod \"console-f9d7485db-khn9r\" (UID: 
\"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037969 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.037994 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038015 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48n8b\" (UniqueName: \"kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-node-pullsecrets\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-image-import-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.038154 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.538125934 +0000 UTC m=+143.620712604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038205 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038226 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-cert\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038256 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/360559c7-6d20-4c26-9cfd-3c82af2df553-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038331 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vfqd\" (UniqueName: \"kubernetes.io/projected/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-kube-api-access-6vfqd\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038353 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038390 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-mountpoint-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038421 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038442 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-proxy-tls\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038461 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-key\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038480 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-plugins-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038520 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-metrics-tls\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038542 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038562 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm6mw\" (UniqueName: \"kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038634 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038651 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-audit-dir\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038669 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-serving-cert\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038701 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhj6m\" (UniqueName: \"kubernetes.io/projected/3e8d9784-6ac1-4393-affb-75f88d7e622b-kube-api-access-fhj6m\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038720 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 
21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.038753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-node-bootstrap-token\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.039721 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.040429 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.041225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.045912 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.053198 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72eec86c-1cfa-4d98-861d-6a745c62cd32-config-volume\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.057157 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.057727 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.058311 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.059182 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.059480 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.063796 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-srv-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.065433 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.065774 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-audit-dir\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.066195 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-trusted-ca\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.066311 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-images\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.066715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.067715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.068500 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.068974 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.070846 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.073351 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.077449 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-cabundle\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" 
Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.082494 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.083845 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.084472 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.086313 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-serving-cert\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.091285 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: 
\"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.094394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" event={"ID":"46ac6500-8082-4d0e-9cb2-3c4fc5f44621","Type":"ContainerStarted","Data":"2c41bcad6ef54c1b636aad7cecc46c7f40f9ec2be4122ebbaab7d1054d38c300"} Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.094456 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.095000 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-audit\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.095272 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.096534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.096620 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.097010 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-config\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.097278 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.099569 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.100191 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.101010 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.102326 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.103294 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.103517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.111394 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-encryption-config\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.112700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/72eec86c-1cfa-4d98-861d-6a745c62cd32-metrics-tls\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.113249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/360559c7-6d20-4c26-9cfd-3c82af2df553-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.113647 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.113667 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ad527301-9226-42f3-a4e2-e10fde60c564-node-pullsecrets\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.116765 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.117114 4801 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/433b7c07-9a1f-4eb1-a82c-2d822e4af130-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.117172 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/089da0b2-23b8-431e-9033-255cfbf12d3a-signing-key\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.123731 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jjg49" event={"ID":"841392cb-04d4-4f8a-93ba-dc7f3abf589b","Type":"ContainerStarted","Data":"0f1f37836ae4bd799db5d89697ec4c6d6261f30166403a0b51b4fc75a53edc02"} Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.124656 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.126164 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-serving-cert\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.126262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.127130 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.127258 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.127645 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-proxy-tls\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.127844 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-metrics-tls\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.128503 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ad527301-9226-42f3-a4e2-e10fde60c564-image-import-ca\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.129840 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbj4f\" (UniqueName: \"kubernetes.io/projected/702dc398-86c5-4d2b-bc25-e8464ed36961-kube-api-access-tbj4f\") pod \"downloads-7954f5f757-mpfrd\" (UID: \"702dc398-86c5-4d2b-bc25-e8464ed36961\") " pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.131555 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.132451 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.134229 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad527301-9226-42f3-a4e2-e10fde60c564-etcd-client\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 
21:09:31.140495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-cert\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.140970 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-mountpoint-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-plugins-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141044 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhj6m\" (UniqueName: \"kubernetes.io/projected/3e8d9784-6ac1-4393-affb-75f88d7e622b-kube-api-access-fhj6m\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141070 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-node-bootstrap-token\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 
21:09:31.141199 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-csi-data-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141232 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftnmh\" (UniqueName: \"kubernetes.io/projected/90e9ba98-83c9-424e-bba8-ddef1a9354cf-kube-api-access-ftnmh\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141265 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-certs\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141284 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpv2t\" (UniqueName: \"kubernetes.io/projected/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-kube-api-access-mpv2t\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141342 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 
21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-socket-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-registration-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141712 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-csi-data-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.141909 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-registration-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.144664 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7wsqf" event={"ID":"458f9b65-7105-44c8-9322-5045d0087cc0","Type":"ContainerStarted","Data":"049585da2bac6d4711cabaed0e8205553e170cc7c7374f614307520a22b8ee15"} Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.144706 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-7wsqf" event={"ID":"458f9b65-7105-44c8-9322-5045d0087cc0","Type":"ContainerStarted","Data":"a8743aed1cbdbe64612c02a50ef1077496e1842376464f7701ed2f4e4d87e756"} Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.145332 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-plugins-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.145399 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-mountpoint-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.145862 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.645843916 +0000 UTC m=+143.728430586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.146812 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90e9ba98-83c9-424e-bba8-ddef1a9354cf-socket-dir\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.149947 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" event={"ID":"f363b84a-05e1-4787-97fe-7b5d24def92d","Type":"ContainerStarted","Data":"6417f0854b5f5b1b91f7a7c2815013aaf603479e102ffddcd23502927a99cb9c"} Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.152435 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-certs\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.152498 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtzb\" (UniqueName: \"kubernetes.io/projected/72eec86c-1cfa-4d98-861d-6a745c62cd32-kube-api-access-7gtzb\") pod \"dns-default-l2sjb\" (UID: \"72eec86c-1cfa-4d98-861d-6a745c62cd32\") " pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.163478 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvlfv\" (UniqueName: \"kubernetes.io/projected/360559c7-6d20-4c26-9cfd-3c82af2df553-kube-api-access-vvlfv\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnp5m\" (UID: \"360559c7-6d20-4c26-9cfd-3c82af2df553\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.168929 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e8d9784-6ac1-4393-affb-75f88d7e622b-node-bootstrap-token\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.169493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2mz\" (UniqueName: \"kubernetes.io/projected/5ff5a85a-5770-4521-b3a2-8f38e340bcfd-kube-api-access-zj2mz\") pod \"multus-admission-controller-857f4d67dd-ck7jb\" (UID: \"5ff5a85a-5770-4521-b3a2-8f38e340bcfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.170044 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-cert\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.171543 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.171995 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 
21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.173431 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qwgff\" (UID: \"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.184352 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.185549 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm6mw\" (UniqueName: \"kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw\") pod \"console-f9d7485db-khn9r\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.192215 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.198404 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vfqd\" (UniqueName: \"kubernetes.io/projected/2cec33dd-5f74-4c4e-b82f-750d0e1d20e7-kube-api-access-6vfqd\") pod \"machine-config-operator-74547568cd-5ckx8\" (UID: \"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: W1124 21:09:31.212630 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b3bb22d_e640_430f_8e3b_4a15d1aa6070.slice/crio-6d67e4526542e4c1d7765f9d07ac390f2bfcb599cd96a80001f9e11dcd4bea75 WatchSource:0}: Error finding container 6d67e4526542e4c1d7765f9d07ac390f2bfcb599cd96a80001f9e11dcd4bea75: Status 404 returned error can't find the container with id 6d67e4526542e4c1d7765f9d07ac390f2bfcb599cd96a80001f9e11dcd4bea75 Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.221824 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.245350 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhdr\" (UniqueName: \"kubernetes.io/projected/cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652-kube-api-access-vfhdr\") pod \"service-ca-operator-777779d784-ssn55\" (UID: \"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.256969 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.258397 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.289083 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.289726 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.789697209 +0000 UTC m=+143.872284039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.330609 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.334291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cszwp\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.345916 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjh6\" (UniqueName: \"kubernetes.io/projected/433b7c07-9a1f-4eb1-a82c-2d822e4af130-kube-api-access-lnjh6\") pod \"olm-operator-6b444d44fb-tpspr\" (UID: \"433b7c07-9a1f-4eb1-a82c-2d822e4af130\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.346840 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.352010 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26pl\" (UniqueName: \"kubernetes.io/projected/e4a5f850-b2d7-4348-a35b-5153f2a52c6d-kube-api-access-z26pl\") pod \"kube-storage-version-migrator-operator-b67b599dd-mqhbv\" (UID: \"e4a5f850-b2d7-4348-a35b-5153f2a52c6d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.360479 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.364522 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.864494735 +0000 UTC m=+143.947081405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.366528 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.374098 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk74s\" (UniqueName: \"kubernetes.io/projected/ad527301-9226-42f3-a4e2-e10fde60c564-kube-api-access-xk74s\") pod \"apiserver-76f77b778f-xsbt9\" (UID: \"ad527301-9226-42f3-a4e2-e10fde60c564\") " pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.379021 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bq9b\" (UniqueName: \"kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b\") pod \"collect-profiles-29400300-wbg7m\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.397755 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lbrg\" (UniqueName: \"kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg\") pod \"marketplace-operator-79b997595-kl9g5\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.404185 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.411125 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.413103 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.413209 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.440984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9jxl\" (UniqueName: \"kubernetes.io/projected/089da0b2-23b8-431e-9033-255cfbf12d3a-kube-api-access-q9jxl\") pod \"service-ca-9c57cc56f-wtj6n\" (UID: \"089da0b2-23b8-431e-9033-255cfbf12d3a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.452999 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p42zd"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.454173 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sp8cw"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.454555 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkgth\" (UniqueName: \"kubernetes.io/projected/b4e7907d-1917-4617-b922-ca338cab51b3-kube-api-access-wkgth\") pod \"migrator-59844c95c7-2njc8\" (UID: \"b4e7907d-1917-4617-b922-ca338cab51b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" Nov 
24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.456004 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jd7z\" (UniqueName: \"kubernetes.io/projected/36ef90b1-bfc1-4cdc-9bf3-611c1f94058a-kube-api-access-5jd7z\") pod \"ingress-operator-5b745b69d9-8qqbn\" (UID: \"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.456254 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.479908 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.480075 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.480555 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:31.980529858 +0000 UTC m=+144.063116528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.493109 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wd6s2"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.499827 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.504202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48n8b\" (UniqueName: \"kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b\") pod \"oauth-openshift-558db77b4-4jjjc\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.507391 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftnmh\" (UniqueName: \"kubernetes.io/projected/90e9ba98-83c9-424e-bba8-ddef1a9354cf-kube-api-access-ftnmh\") pod \"csi-hostpathplugin-xc9p8\" (UID: \"90e9ba98-83c9-424e-bba8-ddef1a9354cf\") " pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.521850 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.528144 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.561168 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhj6m\" (UniqueName: \"kubernetes.io/projected/3e8d9784-6ac1-4393-affb-75f88d7e622b-kube-api-access-fhj6m\") pod \"machine-config-server-rz5j7\" (UID: \"3e8d9784-6ac1-4393-affb-75f88d7e622b\") " pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.584964 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.586421 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.586803 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.086782751 +0000 UTC m=+144.169369421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.600622 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rz5j7" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.614004 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpv2t\" (UniqueName: \"kubernetes.io/projected/6bea707a-7e50-41ad-bdf5-fd09bb6bd713-kube-api-access-mpv2t\") pod \"ingress-canary-x7kmk\" (UID: \"6bea707a-7e50-41ad-bdf5-fd09bb6bd713\") " pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.618611 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.619606 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.666011 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.666345 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.668251 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.668658 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gt2ln"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.675697 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.680008 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.694110 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.694777 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.194701961 +0000 UTC m=+144.277288631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.729151 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.798171 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.798555 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.298542719 +0000 UTC m=+144.381129389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.806715 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7wsqf" podStartSLOduration=122.806696943 podStartE2EDuration="2m2.806696943s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:31.806492946 +0000 UTC m=+143.889079606" watchObservedRunningTime="2025-11-24 21:09:31.806696943 +0000 UTC m=+143.889283603" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.889088 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x7kmk" Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.906228 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:31 crc kubenswrapper[4801]: E1124 21:09:31.906704 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:32.406684057 +0000 UTC m=+144.489270727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.930497 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb"] Nov 24 21:09:31 crc kubenswrapper[4801]: I1124 21:09:31.993575 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" podStartSLOduration=122.993560914 podStartE2EDuration="2m2.993560914s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:31.99169716 +0000 UTC m=+144.074283830" watchObservedRunningTime="2025-11-24 21:09:31.993560914 +0000 UTC m=+144.076147584" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.008835 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.009270 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.509250442 +0000 UTC m=+144.591837112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.109376 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.109751 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.609722792 +0000 UTC m=+144.692309462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.110132 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.110498 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.610489098 +0000 UTC m=+144.693075768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.187289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" event={"ID":"ca285374-4829-4b65-835c-df5877019e4c","Type":"ContainerStarted","Data":"1e21ba74894ef2470a07031ce71760d92201574097d362ab9df14fe388b809f0"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.196904 4801 generic.go:334] "Generic (PLEG): container finished" podID="66361a77-fd52-4c44-bc62-9df560348e1b" containerID="6b6990935fc5818a115485e67aa72891db43594cc6702038d6a251704d45fa47" exitCode=0 Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.197001 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" event={"ID":"66361a77-fd52-4c44-bc62-9df560348e1b","Type":"ContainerDied","Data":"6b6990935fc5818a115485e67aa72891db43594cc6702038d6a251704d45fa47"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.197047 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" event={"ID":"66361a77-fd52-4c44-bc62-9df560348e1b","Type":"ContainerStarted","Data":"d41256e1c999d7022f1fa63549258cad0d2f2a4d4a534b049e8ea87977a76c97"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.207325 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" 
event={"ID":"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0","Type":"ContainerStarted","Data":"35c4655c32520d90884e79e93e32400f363b9dee2083f660137859bae39cb1f1"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.212136 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.212771 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.712755432 +0000 UTC m=+144.795342092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.216478 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jjg49" event={"ID":"841392cb-04d4-4f8a-93ba-dc7f3abf589b","Type":"ContainerStarted","Data":"451382dbc2f46ea9c26c5d32412c1deafd4d9ff2963dbedd60695270ad68e4d9"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.216884 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.228978 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" event={"ID":"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c","Type":"ContainerStarted","Data":"864ad4654fbd62c1e9df9ea278af8fff41dfd86675e53f122971a71b69b17a53"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.234673 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" event={"ID":"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7","Type":"ContainerStarted","Data":"53b3b5d09244580acbc54325d66cf9bceb56c6f5ea8bdd4603bcfc432fdadd1c"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.235881 4801 patch_prober.go:28] interesting pod/console-operator-58897d9998-jjg49 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.235912 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jjg49" podUID="841392cb-04d4-4f8a-93ba-dc7f3abf589b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.286119 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" event={"ID":"46ac6500-8082-4d0e-9cb2-3c4fc5f44621","Type":"ContainerStarted","Data":"e6c07d1e9d25a73cac4a4eceba0735f9142b5be376628deb00797eabff02011d"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.304725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rz5j7" 
event={"ID":"3e8d9784-6ac1-4393-affb-75f88d7e622b","Type":"ContainerStarted","Data":"8121c603d3231c4c9cf29f76e91b3312245bb36d51778c5db49a066d07f2cc80"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.324231 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.353685 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.853663322 +0000 UTC m=+144.936249992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.394933 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lxmnm" podStartSLOduration=124.394897429 podStartE2EDuration="2m4.394897429s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:32.366267571 +0000 UTC m=+144.448854241" watchObservedRunningTime="2025-11-24 
21:09:32.394897429 +0000 UTC m=+144.477484089" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.404849 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" event={"ID":"6e8b3d74-50eb-4b81-a035-fa24854747ab","Type":"ContainerStarted","Data":"0171e7fc8d585dab6b5dba6802c0964b77380084f7425b83849b71a832ee24fd"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.410934 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:32 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:32 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:32 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.411019 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.425667 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" event={"ID":"f363b84a-05e1-4787-97fe-7b5d24def92d","Type":"ContainerStarted","Data":"0477792086fe2ca5117758e6af6ad235110a51c5b240d6f8ba17d021094299d2"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.434697 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd"] Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.456307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" 
event={"ID":"4b3bb22d-e640-430f-8e3b-4a15d1aa6070","Type":"ContainerStarted","Data":"52dd9459816097b302c7f30bdb9ec3d7e74bf2c7ecaea7dfc1c3c062be0eb33e"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.456384 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" event={"ID":"4b3bb22d-e640-430f-8e3b-4a15d1aa6070","Type":"ContainerStarted","Data":"6d67e4526542e4c1d7765f9d07ac390f2bfcb599cd96a80001f9e11dcd4bea75"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.457591 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.458144 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:32.958122491 +0000 UTC m=+145.040709161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.485981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" event={"ID":"e62f3661-a867-4120-8ee5-79f7c76cedaa","Type":"ContainerStarted","Data":"01fb51a9ac59c5fae145048e331ebeeb9b4cdab382bc0c162a148aae0e4955ee"} Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.496734 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf"] Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.567261 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.567781 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.067755602 +0000 UTC m=+145.150342272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.588487 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl"] Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.607587 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4"] Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.631441 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck7jb"] Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.668118 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.668485 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.168469161 +0000 UTC m=+145.251055831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.769505 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.769903 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.269892175 +0000 UTC m=+145.352478845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.871313 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" podStartSLOduration=123.871292699 podStartE2EDuration="2m3.871292699s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:32.867701503 +0000 UTC m=+144.950288163" watchObservedRunningTime="2025-11-24 21:09:32.871292699 +0000 UTC m=+144.953879369" Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.874340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.875180 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.375137402 +0000 UTC m=+145.457724162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:32 crc kubenswrapper[4801]: W1124 21:09:32.971442 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1368b67f_4d82_435c_8afd_b1c16727f118.slice/crio-88d7df1c32401824ce16aea2fbdfb7c4dcbc54de522f5d91eb9a858f6e226236 WatchSource:0}: Error finding container 88d7df1c32401824ce16aea2fbdfb7c4dcbc54de522f5d91eb9a858f6e226236: Status 404 returned error can't find the container with id 88d7df1c32401824ce16aea2fbdfb7c4dcbc54de522f5d91eb9a858f6e226236 Nov 24 21:09:32 crc kubenswrapper[4801]: I1124 21:09:32.979926 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:32 crc kubenswrapper[4801]: E1124 21:09:32.980282 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.480270526 +0000 UTC m=+145.562857196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.078913 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jjg49" podStartSLOduration=124.078880642 podStartE2EDuration="2m4.078880642s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:33.076220149 +0000 UTC m=+145.158806819" watchObservedRunningTime="2025-11-24 21:09:33.078880642 +0000 UTC m=+145.161467312" Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.090925 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.100668 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.60063471 +0000 UTC m=+145.683221380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.101206 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.101730 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.601716297 +0000 UTC m=+145.684302967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.127381 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.132946 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mpfrd"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.138993 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.146052 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l2sjb"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.147257 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-drpf8" podStartSLOduration=124.147226593 podStartE2EDuration="2m4.147226593s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:33.110534274 +0000 UTC m=+145.193120944" watchObservedRunningTime="2025-11-24 21:09:33.147226593 +0000 UTC m=+145.229813263" Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.151840 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:09:33 crc 
kubenswrapper[4801]: I1124 21:09:33.201952 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.202305 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.702289181 +0000 UTC m=+145.784875851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.268864 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.296274 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wtj6n"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.303061 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.303476 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.803463607 +0000 UTC m=+145.886050277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.345490 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.351650 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.353981 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.355631 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ssn55"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.371577 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc9p8"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.406035 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.406203 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.906173696 +0000 UTC m=+145.988760366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.406446 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.406781 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:33.906763726 +0000 UTC m=+145.989350396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.408809 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:33 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:33 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:33 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.408871 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:33 crc kubenswrapper[4801]: W1124 21:09:33.422238 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089da0b2_23b8_431e_9033_255cfbf12d3a.slice/crio-88fdc6d8c5907cf9455296b1b56855552ae910bab53670549037b37465d693a9 WatchSource:0}: Error finding container 88fdc6d8c5907cf9455296b1b56855552ae910bab53670549037b37465d693a9: Status 404 returned error can't find the container with id 88fdc6d8c5907cf9455296b1b56855552ae910bab53670549037b37465d693a9 Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.438736 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x7kmk"] Nov 
24 21:09:33 crc kubenswrapper[4801]: W1124 21:09:33.439094 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbfa6c60_9b08_4bf0_af9b_93a6d0fc5652.slice/crio-66dca5e032fe35563e93f984d85b40fd7682b4afc7c79a342db4c462b7f83fb9 WatchSource:0}: Error finding container 66dca5e032fe35563e93f984d85b40fd7682b4afc7c79a342db4c462b7f83fb9: Status 404 returned error can't find the container with id 66dca5e032fe35563e93f984d85b40fd7682b4afc7c79a342db4c462b7f83fb9 Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.499345 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khn9r" event={"ID":"6994c699-1333-48ce-a5cc-62ce628e3497","Type":"ContainerStarted","Data":"1398945dcc809207baf8aec6721665e1733d3d4518ce1a06b0dc14f5fc014aa5"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.503017 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" event={"ID":"dac2b280-3c5e-43ff-9b5e-b46040ca4904","Type":"ContainerStarted","Data":"eb8332b2364360cc751379ca79d47e5866e3bbbfcc7ed451f2dc2d3262698d4d"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.506790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" event={"ID":"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652","Type":"ContainerStarted","Data":"66dca5e032fe35563e93f984d85b40fd7682b4afc7c79a342db4c462b7f83fb9"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.507461 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 
21:09:33.507877 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.007863549 +0000 UTC m=+146.090450219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.509140 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.515479 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" event={"ID":"5ff5a85a-5770-4521-b3a2-8f38e340bcfd","Type":"ContainerStarted","Data":"d0f00757ab7f650156448cdace5a73475d5301b7c20ce23bb344053d8cf7b9a3"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.519352 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" event={"ID":"ca285374-4829-4b65-835c-df5877019e4c","Type":"ContainerStarted","Data":"30043b2b47bbef457f620b9b0db2987817e14ae6a38264d300d298ac390587b8"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.521117 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" 
event={"ID":"6e8b3d74-50eb-4b81-a035-fa24854747ab","Type":"ContainerStarted","Data":"773fe82253a3b2a4178b61ffc3646f66c4acbc268af21f0a5c599ede9df00e89"} Nov 24 21:09:33 crc kubenswrapper[4801]: W1124 21:09:33.533354 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e9ba98_83c9_424e_bba8_ddef1a9354cf.slice/crio-74a04158b089fcf05c5623ceb05b449e5462c85e3a8f827c0e8c9128042c29c2 WatchSource:0}: Error finding container 74a04158b089fcf05c5623ceb05b449e5462c85e3a8f827c0e8c9128042c29c2: Status 404 returned error can't find the container with id 74a04158b089fcf05c5623ceb05b449e5462c85e3a8f827c0e8c9128042c29c2 Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.533764 4801 generic.go:334] "Generic (PLEG): container finished" podID="cdb2d2a4-a011-4a8d-988a-d24b129ab9f0" containerID="47989a53674e87531c6b516a8158c13ed1c20b037585d78d5c4957450ec85fdc" exitCode=0 Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.534204 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" event={"ID":"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0","Type":"ContainerDied","Data":"47989a53674e87531c6b516a8158c13ed1c20b037585d78d5c4957450ec85fdc"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.542553 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" event={"ID":"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c","Type":"ContainerStarted","Data":"29935a903bb6c4eb164117f7a8c114ed81139fea55d1623567830391d5d45a6d"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.544305 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mpfrd" event={"ID":"702dc398-86c5-4d2b-bc25-e8464ed36961","Type":"ContainerStarted","Data":"ef1ad0182674a84d45cccb0f20bd79603f33241aceb76d2bd576a74648668884"} Nov 24 21:09:33 crc 
kubenswrapper[4801]: I1124 21:09:33.560707 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wd6s2" podStartSLOduration=125.56069311 podStartE2EDuration="2m5.56069311s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:33.558296946 +0000 UTC m=+145.640883616" watchObservedRunningTime="2025-11-24 21:09:33.56069311 +0000 UTC m=+145.643279780" Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.589732 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2sjb" event={"ID":"72eec86c-1cfa-4d98-861d-6a745c62cd32","Type":"ContainerStarted","Data":"e1f2c8db52b92d8bf598ea326f7ae7f0f833e6da50ec6f4fde46e041b36392ad"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.593103 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" event={"ID":"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c","Type":"ContainerStarted","Data":"1268e01165b9c0ff7b3502023b0f65644e1a010200814f06f37b0f3b557a0b34"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.595106 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" event={"ID":"9b7b0134-906e-4ab8-8535-8f33f6879cb8","Type":"ContainerStarted","Data":"6833873c79e93a97a3ace10adf4f35a90a7a98568513fd8a452c04747be5c385"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.608448 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.608739 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.108724523 +0000 UTC m=+146.191311193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.622201 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" event={"ID":"1368b67f-4d82-435c-8afd-b1c16727f118","Type":"ContainerStarted","Data":"88d7df1c32401824ce16aea2fbdfb7c4dcbc54de522f5d91eb9a858f6e226236"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.625201 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" event={"ID":"360559c7-6d20-4c26-9cfd-3c82af2df553","Type":"ContainerStarted","Data":"f7f31ba701cd2ca5195a4fb0af943ac58d8d3b41e30dfd6f57ebf8a89801844b"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.626676 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" event={"ID":"1e6833ad-116b-4620-a27b-59271899cf0c","Type":"ContainerStarted","Data":"16b00e70f3e810729b4a6dcf77eb0cc31ec30f3daaf00822310de34bcd8f4052"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.629562 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" event={"ID":"12387df0-31c7-4315-b960-ec5ff2e629c6","Type":"ContainerStarted","Data":"bb124cafd7460f3907f24e86fa23cc6bed47f407afe74e899706d434d00fd5dc"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.647842 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mfjxs" podStartSLOduration=124.647809395 podStartE2EDuration="2m4.647809395s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:33.609515961 +0000 UTC m=+145.692102621" watchObservedRunningTime="2025-11-24 21:09:33.647809395 +0000 UTC m=+145.730396055" Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.648181 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.649498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" event={"ID":"cd837d07-9682-448e-85c3-f29f598b9441","Type":"ContainerStarted","Data":"9e83d2fcc8a8d6bb158860c0b0918e3c322b778ecf280b4dcfe7ac97b0078b00"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.652167 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" event={"ID":"089da0b2-23b8-431e-9033-255cfbf12d3a","Type":"ContainerStarted","Data":"88fdc6d8c5907cf9455296b1b56855552ae910bab53670549037b37465d693a9"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.653184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" 
event={"ID":"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7","Type":"ContainerStarted","Data":"680898381eb2795c95578631c12b972f02dbfff4f87c3af469e6ab5e13cfa44f"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.673183 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" event={"ID":"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8","Type":"ContainerStarted","Data":"9b2c82aaaeba6e6464973d922a26ed6b8e5bada52d581364cd61673b5db0b2b9"} Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.680159 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.690518 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.693737 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsbt9"] Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.709548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.709757 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.209729563 +0000 UTC m=+146.292316233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.709811 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.710789 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.210758999 +0000 UTC m=+146.293345669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.811289 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.813185 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.313167537 +0000 UTC m=+146.395754207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:33 crc kubenswrapper[4801]: W1124 21:09:33.833426 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38662694_befb_4e2c_9a82_e0bc5ae846db.slice/crio-8b8607e737c66b14ce1b86a86ac074e578aa119d27aaafb9c227a79691417729 WatchSource:0}: Error finding container 8b8607e737c66b14ce1b86a86ac074e578aa119d27aaafb9c227a79691417729: Status 404 returned error can't find the container with id 8b8607e737c66b14ce1b86a86ac074e578aa119d27aaafb9c227a79691417729 Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.870057 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jjg49" Nov 24 21:09:33 crc kubenswrapper[4801]: I1124 21:09:33.948735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:33 crc kubenswrapper[4801]: E1124 21:09:33.949028 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:34.449017251 +0000 UTC m=+146.531603921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.049577 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.049751 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.54972911 +0000 UTC m=+146.632315780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.050354 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.050935 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.550911411 +0000 UTC m=+146.633498071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.151435 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.151945 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.651928881 +0000 UTC m=+146.734515551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.253329 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.253798 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.75378604 +0000 UTC m=+146.836372710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.355048 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.355304 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.855258066 +0000 UTC m=+146.937844786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.355544 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.356112 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:34.856087264 +0000 UTC m=+146.938673934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.408767 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:34 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:34 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:34 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.408823 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.456673 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.456866 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:34.956832525 +0000 UTC m=+147.039419195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.558880 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.559630 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.059615997 +0000 UTC m=+147.142202667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.659798 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.659955 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.159933112 +0000 UTC m=+147.242519782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.660902 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.661508 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.161482396 +0000 UTC m=+147.244069066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.680430 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" event={"ID":"5ff5a85a-5770-4521-b3a2-8f38e340bcfd","Type":"ContainerStarted","Data":"4363320f550c6f04d0919e4eed90d52c487622bc70711dac2cfa7d0d9141ef96"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.685490 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" event={"ID":"dac2b280-3c5e-43ff-9b5e-b46040ca4904","Type":"ContainerStarted","Data":"148d29d2c04615d09ba57cbe7b87fda1cf56f3eeb29a6db6a2ae27566b3dd7f3"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.687113 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" event={"ID":"1e6833ad-116b-4620-a27b-59271899cf0c","Type":"ContainerStarted","Data":"f04bba26c9aad2b51cf8901e8bc969c97ec0c5994f800e03b2b35c88b1d0354b"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.687939 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" event={"ID":"ac8c2382-6e5d-49cb-9028-43b797a70879","Type":"ContainerStarted","Data":"cd7b077dd95438090804c8bc3ace14184dd5787b17e77c8396ec4516e6fe870c"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.690377 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" event={"ID":"1368b67f-4d82-435c-8afd-b1c16727f118","Type":"ContainerStarted","Data":"b0ab9beb988fe6e69c499b33faeee51fda9795c813c392aa490bc4842fc717aa"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.691840 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" event={"ID":"ad527301-9226-42f3-a4e2-e10fde60c564","Type":"ContainerStarted","Data":"68f925de50a4039a258efe27bb2ed96bd952726b343fde40a6631ec2c396fd6c"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.693614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x7kmk" event={"ID":"6bea707a-7e50-41ad-bdf5-fd09bb6bd713","Type":"ContainerStarted","Data":"65bbdead4133e90ad5b441eb04d4a94b9e23c6ca0f747105f508426237954910"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.695155 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rz5j7" event={"ID":"3e8d9784-6ac1-4393-affb-75f88d7e622b","Type":"ContainerStarted","Data":"144247d971651fe8bc1032abe027fdfb1bb239e76c0c857a69f928a108921f9b"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.696737 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" event={"ID":"b4e7907d-1917-4617-b922-ca338cab51b3","Type":"ContainerStarted","Data":"7931229ba5bc2595cc05751600b27c3b00275e6729cfb4d122c6cf9f9ba73ab6"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.698621 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" event={"ID":"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7","Type":"ContainerStarted","Data":"4e7e19b5c4424d1978a2f328b247fbcf406bdf8b8194cd5e2d012f1bf74bf72a"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.701119 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" event={"ID":"f363b84a-05e1-4787-97fe-7b5d24def92d","Type":"ContainerStarted","Data":"051ad9936d50799f21b1b2671fc8ec52a2ab88cb6c21fc37ad9c159008413037"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.702615 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" event={"ID":"e62f3661-a867-4120-8ee5-79f7c76cedaa","Type":"ContainerStarted","Data":"553f9413ba48c4d86f18927245fb6aedf32c7f63bd469f85bc8b1eee3c57e5c0"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.704259 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" event={"ID":"d88dd75a-98e7-4ee4-8bd7-dc53f3553a2c","Type":"ContainerStarted","Data":"92e9a0209725d5fd8ab02ecbe577503fbd93ee7398c78a887840e52e108f3467"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.705844 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mpfrd" event={"ID":"702dc398-86c5-4d2b-bc25-e8464ed36961","Type":"ContainerStarted","Data":"fe8cb23be43c60f0e70e9dc088b8ffac6b4578bf5513909e6389f8d1bbd03ebb"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.706945 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" event={"ID":"90e9ba98-83c9-424e-bba8-ddef1a9354cf","Type":"ContainerStarted","Data":"74a04158b089fcf05c5623ceb05b449e5462c85e3a8f827c0e8c9128042c29c2"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.708219 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" event={"ID":"360559c7-6d20-4c26-9cfd-3c82af2df553","Type":"ContainerStarted","Data":"d2625007c9c6d4d5922c2882d7fbe38c6e693640cb08ae28c1a7b500619de757"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.711064 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" event={"ID":"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a","Type":"ContainerStarted","Data":"485b9acfeb4650587bbf7a92895c942110221fe85fd26f721323fd7138763967"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.712916 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" event={"ID":"433b7c07-9a1f-4eb1-a82c-2d822e4af130","Type":"ContainerStarted","Data":"ae601b2fdbc68c9bd4d33e60c51f592794b4cf61b11b33483bec99a86f4fbc57"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.716711 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" event={"ID":"66361a77-fd52-4c44-bc62-9df560348e1b","Type":"ContainerStarted","Data":"f4e6efee0737dd47c9b4c483973c141010d4a44eb1943e94f5ec6d8069d0eab7"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.718455 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" event={"ID":"c197a84b-67bf-4e6b-9a0f-d3c1b03c3de7","Type":"ContainerStarted","Data":"c3792ee991e0a1c2935c90ba53117ca9917070aed12dff6318cdc414dc65ffb4"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.720008 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2sjb" event={"ID":"72eec86c-1cfa-4d98-861d-6a745c62cd32","Type":"ContainerStarted","Data":"5ccc45c4a353de8173ffd6d53f09f4109f884794f63a747549d1590aec544da0"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.721196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" event={"ID":"e4a5f850-b2d7-4348-a35b-5153f2a52c6d","Type":"ContainerStarted","Data":"65b4945b0e61899e31d9766ce2ce47fb0a246d5525897427fc0078d8f146eac3"} Nov 24 21:09:34 crc 
kubenswrapper[4801]: I1124 21:09:34.722988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" event={"ID":"1d18ec0f-c3fb-43dc-99a7-e896cf2789a8","Type":"ContainerStarted","Data":"4401a64778db8b583127fbe143fe6865ebb01df9449a6276459892a46dae5863"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.724762 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" event={"ID":"46ac6500-8082-4d0e-9cb2-3c4fc5f44621","Type":"ContainerStarted","Data":"c35cb5e174242f43a506b74836e7bc67fbbcd0c8c99c25d5c046d298ac1eba0e"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.727245 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" event={"ID":"f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c","Type":"ContainerStarted","Data":"2ef8c53801ae5d26e26f45cd8d06db78d60963a7899da6b10b579099663fc600"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.732267 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khn9r" event={"ID":"6994c699-1333-48ce-a5cc-62ce628e3497","Type":"ContainerStarted","Data":"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.734502 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" event={"ID":"cd837d07-9682-448e-85c3-f29f598b9441","Type":"ContainerStarted","Data":"1cc4f2c4c8e396f997efb0b648d26d08cc314b5f749ffe975d87ec1b3650eb3f"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.739852 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gwwj9" podStartSLOduration=126.739825786 podStartE2EDuration="2m6.739825786s" 
podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:34.739740683 +0000 UTC m=+146.822327353" watchObservedRunningTime="2025-11-24 21:09:34.739825786 +0000 UTC m=+146.822412456" Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.741328 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" event={"ID":"38662694-befb-4e2c-9a82-e0bc5ae846db","Type":"ContainerStarted","Data":"8b8607e737c66b14ce1b86a86ac074e578aa119d27aaafb9c227a79691417729"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.748894 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" event={"ID":"9b7b0134-906e-4ab8-8535-8f33f6879cb8","Type":"ContainerStarted","Data":"c642c569b2c42cfe85ef0c21d678f7c8fb26354a8a1eb4141de67d639573ea81"} Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.760770 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sp8cw" podStartSLOduration=125.760754415 podStartE2EDuration="2m5.760754415s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:34.759635816 +0000 UTC m=+146.842222486" watchObservedRunningTime="2025-11-24 21:09:34.760754415 +0000 UTC m=+146.843341085" Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.761897 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 
24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.765181 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.265164499 +0000 UTC m=+147.347751169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.863975 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.864408 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.364390677 +0000 UTC m=+147.446977347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:34 crc kubenswrapper[4801]: I1124 21:09:34.965974 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:34 crc kubenswrapper[4801]: E1124 21:09:34.966921 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.466900988 +0000 UTC m=+147.549487658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.071352 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.071872 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.571846505 +0000 UTC m=+147.654433175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.174281 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.174716 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.674692379 +0000 UTC m=+147.757279049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.276511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.276890 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.776877189 +0000 UTC m=+147.859463859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.377719 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.377882 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.877859097 +0000 UTC m=+147.960445777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.378027 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.378569 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:35.878558142 +0000 UTC m=+147.961144822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.410721 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:35 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:35 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:35 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.411350 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.479668 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.480137 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:35.98010154 +0000 UTC m=+148.062688230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.581715 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.582172 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.082152746 +0000 UTC m=+148.164739426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.682651 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.682821 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.182794412 +0000 UTC m=+148.265381082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.682957 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.683316 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.18330328 +0000 UTC m=+148.265889950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.755759 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" event={"ID":"12387df0-31c7-4315-b960-ec5ff2e629c6","Type":"ContainerStarted","Data":"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.756687 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.758275 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" event={"ID":"089da0b2-23b8-431e-9033-255cfbf12d3a","Type":"ContainerStarted","Data":"9e60d9d0a6e9b63ea12e04e90bc5254178b4cbc67ac0193aa89a3bdccfd68950"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.758913 4801 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kl9g5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.758943 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.759971 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" event={"ID":"38662694-befb-4e2c-9a82-e0bc5ae846db","Type":"ContainerStarted","Data":"03887479b476a81195b536b9bf79c4901e63ac4e7012dd2741638f5cb28c2f9d"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.760499 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.761866 4801 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4jjjc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.761905 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.763158 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" event={"ID":"2cec33dd-5f74-4c4e-b82f-750d0e1d20e7","Type":"ContainerStarted","Data":"e3f9a3791ed67ecad15a56541621b2a509f85d00282d1ceca46422f9a9198068"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.764293 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" 
event={"ID":"ac8c2382-6e5d-49cb-9028-43b797a70879","Type":"ContainerStarted","Data":"bce23eb994a8eaf6aef36947cba09e5e22f74471367ad73df05113d1b6c0c7db"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.765701 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x7kmk" event={"ID":"6bea707a-7e50-41ad-bdf5-fd09bb6bd713","Type":"ContainerStarted","Data":"e3fb510931f42b592f9df3c73763a3300f7eeceff7a2c354644d7d8034a9e230"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.767186 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" event={"ID":"1e6833ad-116b-4620-a27b-59271899cf0c","Type":"ContainerStarted","Data":"9d85c843b90d9fb2c1a612a231e74a761bf805d60d3547861afaa21faad830d3"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.767574 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.768710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" event={"ID":"433b7c07-9a1f-4eb1-a82c-2d822e4af130","Type":"ContainerStarted","Data":"54bbad2aab8000141ffe76858327eb03628e959add1b40fc492963fc9f113ca3"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.769288 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.770475 4801 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tpspr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 
21:09:35.770513 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" podUID="433b7c07-9a1f-4eb1-a82c-2d822e4af130" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.771092 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" event={"ID":"1368b67f-4d82-435c-8afd-b1c16727f118","Type":"ContainerStarted","Data":"f6a7087ab751d347383f5248e819a8fae59481eec5f655925f82e76ae4d68407"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.772867 4801 generic.go:334] "Generic (PLEG): container finished" podID="ad527301-9226-42f3-a4e2-e10fde60c564" containerID="6de0e56f3f067da8dc3c2afc7cc30b25d57136d0533183d1d451865ac3671001" exitCode=0 Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.772910 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" event={"ID":"ad527301-9226-42f3-a4e2-e10fde60c564","Type":"ContainerDied","Data":"6de0e56f3f067da8dc3c2afc7cc30b25d57136d0533183d1d451865ac3671001"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.776454 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" event={"ID":"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a","Type":"ContainerStarted","Data":"5a50fff8639f0fe2762581fc0d7c2a4f420fbf965651886326be693bea5c10b4"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.776496 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" event={"ID":"36ef90b1-bfc1-4cdc-9bf3-611c1f94058a","Type":"ContainerStarted","Data":"66d499ab0dd6bf0999efc4efaba7b4cde4c526993fa45b3d8c570fe4286b4f97"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 
21:09:35.777739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" event={"ID":"9b7b0134-906e-4ab8-8535-8f33f6879cb8","Type":"ContainerStarted","Data":"42137dd96f4d39878953c0767647590cdeef08d3e41d30183925a13542de6db4"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.784138 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.784631 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.28460495 +0000 UTC m=+148.367191610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.785337 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" event={"ID":"cdb2d2a4-a011-4a8d-988a-d24b129ab9f0","Type":"ContainerStarted","Data":"034165fa47ebfcdc85def4c76859debbe10e6b884ab133a075dd9494f72cb846"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.785479 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.788652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" event={"ID":"b4e7907d-1917-4617-b922-ca338cab51b3","Type":"ContainerStarted","Data":"75c1625747bc39f89d35a610e4ea57dad294699d0f409d33a4d88054a7256f72"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.788707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" event={"ID":"b4e7907d-1917-4617-b922-ca338cab51b3","Type":"ContainerStarted","Data":"f4bd318776e07085981108e9d980b8cf3f6321ba0166e4739f671daaa161b7f4"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.790098 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" 
event={"ID":"cbfa6c60-9b08-4bf0-af9b-93a6d0fc5652","Type":"ContainerStarted","Data":"3fc638379a273735d2d6e341e0a3ded9906e14317e47305e0a2129f2fd03537f"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.792166 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2sjb" event={"ID":"72eec86c-1cfa-4d98-861d-6a745c62cd32","Type":"ContainerStarted","Data":"cb8e3007f82edff8645f175f735bed59b136b5f99cc7546d14ff01057e218f20"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.793405 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" event={"ID":"5ff5a85a-5770-4521-b3a2-8f38e340bcfd","Type":"ContainerStarted","Data":"945009d22910df1a81711ad5c090272ac3053e82b3c8fbeca6099d405ab3620a"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.795223 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" event={"ID":"e4a5f850-b2d7-4348-a35b-5153f2a52c6d","Type":"ContainerStarted","Data":"3ad5d55a7874a4400c1434e24651fc902c0a78acc7f25dbbdfe70b727d5616a0"} Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.797981 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.798010 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.799473 4801 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j2gwl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 
21:09:35.799504 4801 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8r2wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.799573 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" podUID="dac2b280-3c5e-43ff-9b5e-b46040ca4904" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.799511 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" podUID="1d18ec0f-c3fb-43dc-99a7-e896cf2789a8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.808019 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" podStartSLOduration=126.808000626 podStartE2EDuration="2m6.808000626s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.804728812 +0000 UTC m=+147.887315482" watchObservedRunningTime="2025-11-24 21:09:35.808000626 +0000 UTC m=+147.890587296" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.838583 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" podStartSLOduration=127.838562201 podStartE2EDuration="2m7.838562201s" 
podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.837343678 +0000 UTC m=+147.919930348" watchObservedRunningTime="2025-11-24 21:09:35.838562201 +0000 UTC m=+147.921148871" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.871522 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" podStartSLOduration=126.871504708 podStartE2EDuration="2m6.871504708s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.869558941 +0000 UTC m=+147.952145611" watchObservedRunningTime="2025-11-24 21:09:35.871504708 +0000 UTC m=+147.954091378" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.885434 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.888195 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.388180739 +0000 UTC m=+148.470767399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.905825 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" podStartSLOduration=127.905803844 podStartE2EDuration="2m7.905803844s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.90454814 +0000 UTC m=+147.987134810" watchObservedRunningTime="2025-11-24 21:09:35.905803844 +0000 UTC m=+147.988390514" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.927438 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" podStartSLOduration=126.927421727 podStartE2EDuration="2m6.927421727s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.926145193 +0000 UTC m=+148.008731863" watchObservedRunningTime="2025-11-24 21:09:35.927421727 +0000 UTC m=+148.010008397" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.940395 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mqhbv" podStartSLOduration=126.940384658 podStartE2EDuration="2m6.940384658s" 
podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.940087219 +0000 UTC m=+148.022673889" watchObservedRunningTime="2025-11-24 21:09:35.940384658 +0000 UTC m=+148.022971328" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.958281 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z5rv4" podStartSLOduration=126.958261591 podStartE2EDuration="2m6.958261591s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.956224141 +0000 UTC m=+148.038810811" watchObservedRunningTime="2025-11-24 21:09:35.958261591 +0000 UTC m=+148.040848261" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.976151 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" podStartSLOduration=126.976133004 podStartE2EDuration="2m6.976133004s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.97429407 +0000 UTC m=+148.056880740" watchObservedRunningTime="2025-11-24 21:09:35.976133004 +0000 UTC m=+148.058719674" Nov 24 21:09:35 crc kubenswrapper[4801]: I1124 21:09:35.986902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:35 crc kubenswrapper[4801]: E1124 21:09:35.988337 4801 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.488321658 +0000 UTC m=+148.570908328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.006913 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-24thm" podStartSLOduration=127.006890396 podStartE2EDuration="2m7.006890396s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:35.993724727 +0000 UTC m=+148.076311387" watchObservedRunningTime="2025-11-24 21:09:36.006890396 +0000 UTC m=+148.089477066" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.031792 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnp5m" podStartSLOduration=127.031757042 podStartE2EDuration="2m7.031757042s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.027860196 +0000 UTC m=+148.110446866" watchObservedRunningTime="2025-11-24 21:09:36.031757042 +0000 UTC m=+148.114343712" Nov 24 21:09:36 crc 
kubenswrapper[4801]: I1124 21:09:36.051975 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qwgff" podStartSLOduration=127.051949876 podStartE2EDuration="2m7.051949876s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.050872108 +0000 UTC m=+148.133458778" watchObservedRunningTime="2025-11-24 21:09:36.051949876 +0000 UTC m=+148.134536546" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.072297 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" podStartSLOduration=127.072280734 podStartE2EDuration="2m7.072280734s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.070791223 +0000 UTC m=+148.153377893" watchObservedRunningTime="2025-11-24 21:09:36.072280734 +0000 UTC m=+148.154867404" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.090190 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.090740 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:36.590722457 +0000 UTC m=+148.673309117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.101390 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ckx8" podStartSLOduration=127.101327246 podStartE2EDuration="2m7.101327246s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.092448677 +0000 UTC m=+148.175035347" watchObservedRunningTime="2025-11-24 21:09:36.101327246 +0000 UTC m=+148.183913916" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.113091 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ssn55" podStartSLOduration=127.113062316 podStartE2EDuration="2m7.113062316s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.112506286 +0000 UTC m=+148.195092956" watchObservedRunningTime="2025-11-24 21:09:36.113062316 +0000 UTC m=+148.195648986" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.156261 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gt2ln" 
podStartSLOduration=127.156231619 podStartE2EDuration="2m7.156231619s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.134406209 +0000 UTC m=+148.216992879" watchObservedRunningTime="2025-11-24 21:09:36.156231619 +0000 UTC m=+148.238818289" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.157485 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s6zhb" podStartSLOduration=127.157479903 podStartE2EDuration="2m7.157479903s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.155030888 +0000 UTC m=+148.237617568" watchObservedRunningTime="2025-11-24 21:09:36.157479903 +0000 UTC m=+148.240066573" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.193038 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.193271 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.693235369 +0000 UTC m=+148.775822029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.193515 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.193899 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.693883392 +0000 UTC m=+148.776470062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.217127 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rz5j7" podStartSLOduration=8.217104691 podStartE2EDuration="8.217104691s" podCreationTimestamp="2025-11-24 21:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.214987166 +0000 UTC m=+148.297573836" watchObservedRunningTime="2025-11-24 21:09:36.217104691 +0000 UTC m=+148.299691361" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.231692 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wtj6n" podStartSLOduration=127.231672278 podStartE2EDuration="2m7.231672278s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.231279955 +0000 UTC m=+148.313866625" watchObservedRunningTime="2025-11-24 21:09:36.231672278 +0000 UTC m=+148.314258948" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.248658 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mpfrd" podStartSLOduration=127.248637179 podStartE2EDuration="2m7.248637179s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.248350719 +0000 UTC m=+148.330937389" watchObservedRunningTime="2025-11-24 21:09:36.248637179 +0000 UTC m=+148.331223849" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.294255 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.294492 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.794469006 +0000 UTC m=+148.877055676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.294660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.295009 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.794998615 +0000 UTC m=+148.877585285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.296865 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x7kmk" podStartSLOduration=8.296828578 podStartE2EDuration="8.296828578s" podCreationTimestamp="2025-11-24 21:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.27076269 +0000 UTC m=+148.353349370" watchObservedRunningTime="2025-11-24 21:09:36.296828578 +0000 UTC m=+148.379415248" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.297115 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgbf4" podStartSLOduration=128.297110348 podStartE2EDuration="2m8.297110348s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.29401836 +0000 UTC m=+148.376605040" watchObservedRunningTime="2025-11-24 21:09:36.297110348 +0000 UTC m=+148.379697018" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.315356 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" podStartSLOduration=127.315341284 podStartE2EDuration="2m7.315341284s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.313582342 +0000 UTC m=+148.396169012" watchObservedRunningTime="2025-11-24 21:09:36.315341284 +0000 UTC m=+148.397927954" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.329585 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-khn9r" podStartSLOduration=127.329569639 podStartE2EDuration="2m7.329569639s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.329057662 +0000 UTC m=+148.411644332" watchObservedRunningTime="2025-11-24 21:09:36.329569639 +0000 UTC m=+148.412156309" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.348255 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" podStartSLOduration=127.34823926 podStartE2EDuration="2m7.34823926s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.34507707 +0000 UTC m=+148.427663740" watchObservedRunningTime="2025-11-24 21:09:36.34823926 +0000 UTC m=+148.430825930" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.368195 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wr8kc" podStartSLOduration=127.368178275 podStartE2EDuration="2m7.368178275s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.36605918 +0000 UTC m=+148.448645850" watchObservedRunningTime="2025-11-24 
21:09:36.368178275 +0000 UTC m=+148.450764945" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.395752 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.396164 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.896149679 +0000 UTC m=+148.978736349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.408325 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:36 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:36 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:36 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.408445 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" 
podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.497955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.498400 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:36.998378661 +0000 UTC m=+149.080965331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.598889 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.599160 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.099129672 +0000 UTC m=+149.181716332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.599918 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.600078 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.600238 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.600485 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.100460118 +0000 UTC m=+149.183046968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.607022 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.608542 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.702378 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.702623 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.202564626 +0000 UTC m=+149.285151296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.702663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.702740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.702788 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.703325 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.203301262 +0000 UTC m=+149.285887972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.708282 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.718329 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.725230 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.783963 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.803505 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.804089 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.304053472 +0000 UTC m=+149.386640312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.808459 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" event={"ID":"ad527301-9226-42f3-a4e2-e10fde60c564","Type":"ContainerStarted","Data":"9fea7416de48d49a0367d15fabb6cf8c4ac588cf1bf020bcce58ba4359d1afa0"} Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809833 4801 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4jjjc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809847 4801 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kl9g5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809875 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809889 4801 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809942 4801 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j2gwl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809956 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" podUID="1d18ec0f-c3fb-43dc-99a7-e896cf2789a8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.809988 4801 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8r2wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.810012 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" podUID="dac2b280-3c5e-43ff-9b5e-b46040ca4904" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.810386 4801 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tpspr container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.810410 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" podUID="433b7c07-9a1f-4eb1-a82c-2d822e4af130" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.828636 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck7jb" podStartSLOduration=127.828612288 podStartE2EDuration="2m7.828612288s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.827261661 +0000 UTC m=+148.909848331" watchObservedRunningTime="2025-11-24 21:09:36.828612288 +0000 UTC m=+148.911198958" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.863140 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l2sjb" podStartSLOduration=8.86310782 podStartE2EDuration="8.86310782s" podCreationTimestamp="2025-11-24 21:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.861793984 +0000 UTC m=+148.944380664" watchObservedRunningTime="2025-11-24 21:09:36.86310782 +0000 UTC m=+148.945694490" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.900426 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2njc8" podStartSLOduration=127.90039883 
podStartE2EDuration="2m7.90039883s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:36.899516139 +0000 UTC m=+148.982102809" watchObservedRunningTime="2025-11-24 21:09:36.90039883 +0000 UTC m=+148.982985510" Nov 24 21:09:36 crc kubenswrapper[4801]: I1124 21:09:36.905192 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:36 crc kubenswrapper[4801]: E1124 21:09:36.958057 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.458034708 +0000 UTC m=+149.540621378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.006590 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.006892 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.007079 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.507063186 +0000 UTC m=+149.589649856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.007221 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.007557 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.507547733 +0000 UTC m=+149.590134393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.065053 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8qqbn" podStartSLOduration=128.065008835 podStartE2EDuration="2m8.065008835s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:37.004818258 +0000 UTC m=+149.087404918" watchObservedRunningTime="2025-11-24 21:09:37.065008835 +0000 UTC m=+149.147595505" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.107855 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.108216 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:37.60819878 +0000 UTC m=+149.690785450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.209301 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.209726 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.709707867 +0000 UTC m=+149.792294537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.295351 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.310608 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.310957 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.810943115 +0000 UTC m=+149.893529785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.413710 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.414316 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:37.914295796 +0000 UTC m=+149.996882466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.415809 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:37 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:37 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:37 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.415850 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.515469 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.515650 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:38.015625337 +0000 UTC m=+150.098212007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.516128 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.516470 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.016462855 +0000 UTC m=+150.099049525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.617048 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.617237 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.117209106 +0000 UTC m=+150.199795776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.617281 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.617630 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.11762252 +0000 UTC m=+150.200209190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.718057 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.718198 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.218169324 +0000 UTC m=+150.300755994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.718347 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.718719 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.218705142 +0000 UTC m=+150.301291812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.819501 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.819792 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.319744793 +0000 UTC m=+150.402331473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.820212 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.820564 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.320550622 +0000 UTC m=+150.403137292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.834353 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4e228d48893f299e98f8d0ad6ce2b9a98cc42f4c727e459c80af5b72b388d18c"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.834429 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7fc527ed71561ac94c7cdc382ebba36ae7aa8316580bee5cbee0e10ea86e93f2"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.836879 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" event={"ID":"90e9ba98-83c9-424e-bba8-ddef1a9354cf","Type":"ContainerStarted","Data":"bca9eccd4348ee01e3179daa2211a748fc12431688ee9445fb8b5ed12e5215d5"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.840202 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" event={"ID":"ad527301-9226-42f3-a4e2-e10fde60c564","Type":"ContainerStarted","Data":"37d5b31e83709342cde2ee9fa50feec3eea9045890b993dedd6ce27fc10df3e6"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.844800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7fe7d46c6583d67f1ea87eaccd25ff119fa883c18fc4b03bb481d2dfaad38999"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.844831 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"55cba05c2717f09ba5760a3366b8881c2d011fc2e9c007c7eda9ee22caba3c39"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.852795 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d845a7b2199d836da7be1429f541fe131362baf7e72dc6efb39ee8ec28b3ff89"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.852859 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"159f9e1ad5816d84e0c6106274ed1fcfb7d3980da0c44cdaae8af5402a4ff980"} Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.853936 4801 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kl9g5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.854046 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857483 4801 
patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tpspr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857532 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" podUID="433b7c07-9a1f-4eb1-a82c-2d822e4af130" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857483 4801 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4jjjc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857579 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857496 4801 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p42zd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.857614 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" 
podUID="cdb2d2a4-a011-4a8d-988a-d24b129ab9f0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.920872 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.921054 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.421031003 +0000 UTC m=+150.503617673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:37 crc kubenswrapper[4801]: I1124 21:09:37.921247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:37 crc kubenswrapper[4801]: E1124 21:09:37.923078 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.423065933 +0000 UTC m=+150.505652593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.022548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.022843 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.52282624 +0000 UTC m=+150.605412910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.124057 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.124661 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.624636217 +0000 UTC m=+150.707222887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.225333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.225519 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.725493731 +0000 UTC m=+150.808080391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.225572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.225905 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.725893385 +0000 UTC m=+150.808480055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.326958 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.327352 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.827128393 +0000 UTC m=+150.909715063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.327436 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.327742 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.827734794 +0000 UTC m=+150.910321464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.413436 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:38 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:38 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:38 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.413515 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.428506 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.428702 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:38.928676441 +0000 UTC m=+151.011263111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.428811 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.429112 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:38.929100365 +0000 UTC m=+151.011687035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.530010 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.530265 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.030231049 +0000 UTC m=+151.112817729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.531016 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.531374 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.031351549 +0000 UTC m=+151.113938429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.632215 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.632458 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.132418271 +0000 UTC m=+151.215004941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.632856 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.633342 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.133331383 +0000 UTC m=+151.215918043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.694259 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" podStartSLOduration=130.694241385 podStartE2EDuration="2m10.694241385s" podCreationTimestamp="2025-11-24 21:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:37.863104425 +0000 UTC m=+149.945691095" watchObservedRunningTime="2025-11-24 21:09:38.694241385 +0000 UTC m=+150.776828055" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.733882 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.734040 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.234017141 +0000 UTC m=+151.316603811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.734213 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.734621 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.234613971 +0000 UTC m=+151.317200641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.835653 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.835851 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.335819458 +0000 UTC m=+151.418406128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.836015 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.836345 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.336334336 +0000 UTC m=+151.418921006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.854031 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.855786 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.864729 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.864963 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.871706 4801 generic.go:334] "Generic (PLEG): container finished" podID="ac8c2382-6e5d-49cb-9028-43b797a70879" containerID="bce23eb994a8eaf6aef36947cba09e5e22f74471367ad73df05113d1b6c0c7db" exitCode=0 Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.872637 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" event={"ID":"ac8c2382-6e5d-49cb-9028-43b797a70879","Type":"ContainerDied","Data":"bce23eb994a8eaf6aef36947cba09e5e22f74471367ad73df05113d1b6c0c7db"} Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.872939 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.884986 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.936882 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.937237 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:38 crc kubenswrapper[4801]: I1124 21:09:38.937277 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:38 crc kubenswrapper[4801]: E1124 21:09:38.937676 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.437660137 +0000 UTC m=+151.520246807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.039151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.039444 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.039544 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.039586 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.039876 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.539853907 +0000 UTC m=+151.622440577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.076940 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.141377 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.141838 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:39.64180965 +0000 UTC m=+151.724396320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.185950 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.243446 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.244396 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.744359363 +0000 UTC m=+151.826946033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.345526 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.345923 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.845908641 +0000 UTC m=+151.928495311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.412158 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:39 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:39 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:39 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.412241 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.421736 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.446855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.447197 4801 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:39.947183991 +0000 UTC m=+152.029770661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.548058 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.548266 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.048231992 +0000 UTC m=+152.130818662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.548975 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.549346 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.049335219 +0000 UTC m=+152.131921889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.612707 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p42zd" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.649845 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.650384 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.150340089 +0000 UTC m=+152.232926759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.752007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.752439 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.252422876 +0000 UTC m=+152.335009546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.853610 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.853795 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.353766658 +0000 UTC m=+152.436353328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.853866 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.854473 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.354465701 +0000 UTC m=+152.437052371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.877464 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a","Type":"ContainerStarted","Data":"09e667dc1c068d876addf7f3e96f280c4a7027559831b6e5c899962128ffef43"} Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.930108 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"] Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.931088 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.936930 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.948466 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"] Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.954904 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.955095 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.455065377 +0000 UTC m=+152.537652047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:39 crc kubenswrapper[4801]: I1124 21:09:39.955349 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:39 crc kubenswrapper[4801]: E1124 21:09:39.955623 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.455616226 +0000 UTC m=+152.538202896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.057178 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.057484 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.557450975 +0000 UTC m=+152.640037655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.058155 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.058194 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxb7\" (UniqueName: \"kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.058231 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.058297 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.059649 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.55963357 +0000 UTC m=+152.642220250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.129784 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hmrp"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.130743 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.132701 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.140602 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.141115 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hmrp"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.159602 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.159842 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.659808231 +0000 UTC m=+152.742394911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.159889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.159980 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.160024 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvxb7\" (UniqueName: \"kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.160068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities\") pod \"certified-operators-fkzh7\" (UID: 
\"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.160588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.160650 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.66063735 +0000 UTC m=+152.743224020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.160770 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities\") pod \"certified-operators-fkzh7\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.190978 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxb7\" (UniqueName: \"kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7\") pod \"certified-operators-fkzh7\" (UID: 
\"20e2e67e-3f3e-404b-b489-8b498ba44334\") " pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.243859 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.261970 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262045 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume\") pod \"ac8c2382-6e5d-49cb-9028-43b797a70879\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262133 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bq9b\" (UniqueName: \"kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b\") pod \"ac8c2382-6e5d-49cb-9028-43b797a70879\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262310 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume\") pod \"ac8c2382-6e5d-49cb-9028-43b797a70879\" (UID: \"ac8c2382-6e5d-49cb-9028-43b797a70879\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262780 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwbr\" (UniqueName: \"kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.262813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.263056 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.763030217 +0000 UTC m=+152.845617127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.267251 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac8c2382-6e5d-49cb-9028-43b797a70879" (UID: "ac8c2382-6e5d-49cb-9028-43b797a70879"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.267354 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b" (OuterVolumeSpecName: "kube-api-access-6bq9b") pod "ac8c2382-6e5d-49cb-9028-43b797a70879" (UID: "ac8c2382-6e5d-49cb-9028-43b797a70879"). InnerVolumeSpecName "kube-api-access-6bq9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.271819 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac8c2382-6e5d-49cb-9028-43b797a70879" (UID: "ac8c2382-6e5d-49cb-9028-43b797a70879"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.353019 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.353697 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8c2382-6e5d-49cb-9028-43b797a70879" containerName="collect-profiles" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.353717 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8c2382-6e5d-49cb-9028-43b797a70879" containerName="collect-profiles" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.353818 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8c2382-6e5d-49cb-9028-43b797a70879" containerName="collect-profiles" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.354564 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.364004 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.364680 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365067 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwbr\" (UniqueName: 
\"kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365124 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365170 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365272 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8c2382-6e5d-49cb-9028-43b797a70879-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365289 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8c2382-6e5d-49cb-9028-43b797a70879-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365299 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bq9b\" (UniqueName: \"kubernetes.io/projected/ac8c2382-6e5d-49cb-9028-43b797a70879-kube-api-access-6bq9b\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.365647 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.365691 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.865673335 +0000 UTC m=+152.948260005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.396778 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.406024 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.417024 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:40 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:40 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:40 crc kubenswrapper[4801]: healthz check failed Nov 24 
21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.417116 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.433577 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwbr\" (UniqueName: \"kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr\") pod \"community-operators-6hmrp\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.460037 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.473017 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.473356 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.473404 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcfph\" (UniqueName: \"kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph\") pod 
\"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.473465 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.474394 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:40.974377702 +0000 UTC m=+153.056964362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.476897 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.477415 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.501720 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:40 
crc kubenswrapper[4801]: I1124 21:09:40.538975 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.540527 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.571047 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.575602 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.575662 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcfph\" (UniqueName: \"kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.575704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.575760 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.576265 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.578252 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.585150 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.0851175 +0000 UTC m=+153.167704170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.616726 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcfph\" (UniqueName: \"kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph\") pod \"certified-operators-wwqg7\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.628918 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.678208 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.678591 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.678625 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.678725 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdw52\" (UniqueName: \"kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.678854 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.178832276 +0000 UTC m=+153.261418946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.707144 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.781668 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.781981 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.782014 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.782088 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdw52\" (UniqueName: \"kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.782760 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities\") pod \"community-operators-fsqp9\" 
(UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.782967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.783229 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.283218264 +0000 UTC m=+153.365804934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.784899 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hmrp"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.803157 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdw52\" (UniqueName: \"kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52\") pod \"community-operators-fsqp9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: W1124 21:09:40.806501 4801 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5157ae_cc6c_41a3_a372_98ce4cd31e7b.slice/crio-2a1f0134af674e36cd8228ca7c63d02a2a521b25313ffc953d0d5321672ff5c7 WatchSource:0}: Error finding container 2a1f0134af674e36cd8228ca7c63d02a2a521b25313ffc953d0d5321672ff5c7: Status 404 returned error can't find the container with id 2a1f0134af674e36cd8228ca7c63d02a2a521b25313ffc953d0d5321672ff5c7 Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.882795 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.882921 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.382897457 +0000 UTC m=+153.465484117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.883043 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.883326 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.383314641 +0000 UTC m=+153.465901311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.892298 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerStarted","Data":"1d1caaeb534c6dce815dfb10289fefba29f954be51e7c89443a4926723ce6cd0"} Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.893728 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a","Type":"ContainerStarted","Data":"2b635f0ee06c72b68346f9693ef6da49215867aad62c426fabc53c9d4154325a"} Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.909191 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" event={"ID":"ac8c2382-6e5d-49cb-9028-43b797a70879","Type":"ContainerDied","Data":"cd7b077dd95438090804c8bc3ace14184dd5787b17e77c8396ec4516e6fe870c"} Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.909237 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7b077dd95438090804c8bc3ace14184dd5787b17e77c8396ec4516e6fe870c" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.909338 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.913504 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerStarted","Data":"2a1f0134af674e36cd8228ca7c63d02a2a521b25313ffc953d0d5321672ff5c7"} Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.913877 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.917977 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.917966438 podStartE2EDuration="2.917966438s" podCreationTimestamp="2025-11-24 21:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:40.917878605 +0000 UTC m=+153.000465275" watchObservedRunningTime="2025-11-24 21:09:40.917966438 +0000 UTC m=+153.000553108" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.923483 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg997" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.963117 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8r2wf" Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.964297 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:09:40 crc kubenswrapper[4801]: I1124 21:09:40.985443 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:40 crc kubenswrapper[4801]: E1124 21:09:40.986647 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.486623861 +0000 UTC m=+153.569210531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.087656 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.088328 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.588301914 +0000 UTC m=+153.670888584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.119475 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2gwl" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.191512 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.193181 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.693152387 +0000 UTC m=+153.775739057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.196073 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.196154 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.197635 4801 patch_prober.go:28] interesting pod/console-f9d7485db-khn9r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.197717 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-khn9r" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.222674 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.223937 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-mpfrd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection 
refused" start-of-body= Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.224025 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mpfrd" podUID="702dc398-86c5-4d2b-bc25-e8464ed36961" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.225776 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-mpfrd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.225799 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-mpfrd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.225889 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mpfrd" podUID="702dc398-86c5-4d2b-bc25-e8464ed36961" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.225886 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mpfrd" podUID="702dc398-86c5-4d2b-bc25-e8464ed36961" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.293542 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.293893 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.793880397 +0000 UTC m=+153.876467067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.333257 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:09:41 crc kubenswrapper[4801]: W1124 21:09:41.345356 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb096112_8256_4335_8a37_c03fbc570dd9.slice/crio-feded5aa2850f9d199203aa37aed36735d38a7305a37ffc78125b36d930f9ad6 WatchSource:0}: Error finding container feded5aa2850f9d199203aa37aed36735d38a7305a37ffc78125b36d930f9ad6: Status 404 returned error can't find the container with id feded5aa2850f9d199203aa37aed36735d38a7305a37ffc78125b36d930f9ad6 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.395132 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.396553 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.896517114 +0000 UTC m=+153.979103784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.407492 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:41 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:41 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:41 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.407557 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.417794 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpspr" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.497113 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.497586 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:41.997567154 +0000 UTC m=+154.080153824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.512544 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.599034 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:41 crc kubenswrapper[4801]: 
E1124 21:09:41.599486 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.099453314 +0000 UTC m=+154.182039984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.666905 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.667073 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.686218 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.700925 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.701470 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.201449179 +0000 UTC m=+154.284035849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.801963 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.803178 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.303154932 +0000 UTC m=+154.385741602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.904574 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:41 crc kubenswrapper[4801]: E1124 21:09:41.904997 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.40497849 +0000 UTC m=+154.487565160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.905587 4801 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.921666 4801 generic.go:334] "Generic (PLEG): container finished" podID="db096112-8256-4335-8a37-c03fbc570dd9" containerID="734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89" exitCode=0 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.921743 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerDied","Data":"734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.921772 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerStarted","Data":"feded5aa2850f9d199203aa37aed36735d38a7305a37ffc78125b36d930f9ad6"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.923445 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerID="738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307" exitCode=0 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.923506 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerDied","Data":"738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.924439 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.927000 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" event={"ID":"90e9ba98-83c9-424e-bba8-ddef1a9354cf","Type":"ContainerStarted","Data":"b262343c46d0c0d6174a1a814c9ec5abaa9a4c60d28194b27289f4ca05f9dc7c"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.927048 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" event={"ID":"90e9ba98-83c9-424e-bba8-ddef1a9354cf","Type":"ContainerStarted","Data":"acf117dcd495800b0278d2e4801d1498429112e5eac502f168bb97b0a75f0e7a"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.927059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" event={"ID":"90e9ba98-83c9-424e-bba8-ddef1a9354cf","Type":"ContainerStarted","Data":"e4a66951a8fa603169a6b958c3960f8c43e38115a1b94f7757c664036ed9574b"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.928721 4801 generic.go:334] "Generic (PLEG): container finished" podID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerID="27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa" exitCode=0 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.928815 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerDied","Data":"27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.930510 4801 generic.go:334] 
"Generic (PLEG): container finished" podID="2b164446-8c7c-479a-84cf-c68680598934" containerID="eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f" exitCode=0 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.930569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerDied","Data":"eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.930587 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerStarted","Data":"af351f2706493d29d689dd559c2499b916641a40cb60a63bc553e2bc70ea3508"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.932766 4801 generic.go:334] "Generic (PLEG): container finished" podID="676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" containerID="2b635f0ee06c72b68346f9693ef6da49215867aad62c426fabc53c9d4154325a" exitCode=0 Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.933057 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a","Type":"ContainerDied","Data":"2b635f0ee06c72b68346f9693ef6da49215867aad62c426fabc53c9d4154325a"} Nov 24 21:09:41 crc kubenswrapper[4801]: I1124 21:09:41.998645 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xc9p8" podStartSLOduration=13.998618712 podStartE2EDuration="13.998618712s" podCreationTimestamp="2025-11-24 21:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:41.969483418 +0000 UTC m=+154.052070088" watchObservedRunningTime="2025-11-24 21:09:41.998618712 +0000 UTC m=+154.081205382" Nov 24 21:09:42 crc 
kubenswrapper[4801]: I1124 21:09:42.004410 4801 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xsbt9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]log ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]etcd ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/max-in-flight-filter ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 21:09:42 crc kubenswrapper[4801]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 21:09:42 crc kubenswrapper[4801]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/openshift.io-startinformers ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 21:09:42 crc kubenswrapper[4801]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 21:09:42 crc kubenswrapper[4801]: livez check failed Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.004485 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" podUID="ad527301-9226-42f3-a4e2-e10fde60c564" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:42 crc 
kubenswrapper[4801]: I1124 21:09:42.006306 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.006811 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.506789348 +0000 UTC m=+154.589376018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.107763 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.108316 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:42.608293204 +0000 UTC m=+154.690879874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.127704 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.129067 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.131407 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.140797 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.209234 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.209515 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:09:42.7094702 +0000 UTC m=+154.792056870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.209588 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.210300 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.710293229 +0000 UTC m=+154.792879899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.312078 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.313060 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.813019288 +0000 UTC m=+154.895605998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.313190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8lcp\" (UniqueName: \"kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.313275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.313360 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.313427 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.313809 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.813794115 +0000 UTC m=+154.896380785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.408877 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:42 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:42 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:42 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.408926 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.414430 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.414612 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.414647 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.414686 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8lcp\" (UniqueName: \"kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.415056 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:42.915043523 +0000 UTC m=+154.997630193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.415429 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.415643 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.436936 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8lcp\" (UniqueName: \"kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp\") pod \"redhat-marketplace-qj9jp\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.445423 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.516977 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.517733 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:43.017717241 +0000 UTC m=+155.100303911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.534160 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.535422 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.574867 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.618309 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.618662 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:43.118648998 +0000 UTC m=+155.201235668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.639114 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.640668 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.643701 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.644413 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.649747 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.719979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.720040 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.720089 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.720117 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69dv\" (UniqueName: \"kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 21:09:42.721928 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:09:43.221915136 +0000 UTC m=+155.304501806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-62t9t" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.821481 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.821707 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: E1124 
21:09:42.821808 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:09:43.321780575 +0000 UTC m=+155.404367245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.821960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.822021 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.822090 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69dv\" (UniqueName: \"kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " 
pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.822246 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.822725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.822999 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.831543 4801 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T21:09:41.905636373Z","Handler":null,"Name":""} Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.834970 4801 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.835000 4801 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 
21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.849601 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69dv\" (UniqueName: \"kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv\") pod \"redhat-marketplace-clxkx\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.855073 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.924359 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.924433 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.924490 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.924503 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.931844 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.931878 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.959135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.969772 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.983312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-62t9t\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:42 crc kubenswrapper[4801]: I1124 21:09:42.983671 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.008420 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.026259 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.062909 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.137394 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.138854 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.141017 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.172092 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.233464 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.233522 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.234034 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpz2\" (UniqueName: \"kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.264081 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.343250 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.343749 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.343787 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpz2\" (UniqueName: \"kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.344052 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.344411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.381509 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpz2\" (UniqueName: 
\"kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2\") pod \"redhat-operators-5gn65\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.414528 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:43 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:43 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:43 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.414576 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.463624 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.492851 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.549585 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.558654 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.560003 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.674198 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.753984 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.754063 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.754092 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnwd\" (UniqueName: \"kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.775312 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854473 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir\") pod \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\" (UID: 
\"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854555 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access\") pod \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\" (UID: \"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a\") " Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854692 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" (UID: "676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854849 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854888 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnwd\" (UniqueName: \"kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.854961 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " 
pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.855025 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.855498 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.855988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.869494 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" (UID: "676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.881166 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"] Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.883228 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnwd\" (UniqueName: \"kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd\") pod \"redhat-operators-j4gl7\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.929265 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:09:43 crc kubenswrapper[4801]: W1124 21:09:43.955292 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1cb531_6f5c_4298_b301_dd4a644eaf50.slice/crio-d8c65569f7666d959ec9141ab3af7766789c45224637b9661610fe359b68eba4 WatchSource:0}: Error finding container d8c65569f7666d959ec9141ab3af7766789c45224637b9661610fe359b68eba4: Status 404 returned error can't find the container with id d8c65569f7666d959ec9141ab3af7766789c45224637b9661610fe359b68eba4 Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.956432 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.969049 4801 generic.go:334] "Generic (PLEG): container finished" podID="84ba0c04-c136-47be-b036-5340070a8e23" containerID="ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f" exitCode=0 Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.969418 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerDied","Data":"ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.969492 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerStarted","Data":"31c6a786233fd2e46d094f0b904d0223df95606af2873d217f9766179aee020e"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.971833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"173a91e3-91fb-45a8-8ea5-7c01b290b45a","Type":"ContainerStarted","Data":"e83d0a38bc941efda5686b42977d51ccf484e80d35985f898d904d5f99fe2ba3"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.975144 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.975176 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a","Type":"ContainerDied","Data":"09e667dc1c068d876addf7f3e96f280c4a7027559831b6e5c899962128ffef43"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.975229 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e667dc1c068d876addf7f3e96f280c4a7027559831b6e5c899962128ffef43" Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.978658 4801 generic.go:334] "Generic (PLEG): container finished" podID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerID="cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7" exitCode=0 Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.978715 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerDied","Data":"cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.978738 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerStarted","Data":"a3fc7b45316c89150c3dd24f0406460d37e12323796839d5a8c4bf4a210bdc4a"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.982111 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" event={"ID":"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78","Type":"ContainerStarted","Data":"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.982137 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" event={"ID":"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78","Type":"ContainerStarted","Data":"aff25d1499b0ea88fe1b7d730f0417805dc567de7c18caf41f23c09b502428ad"} Nov 24 21:09:43 crc kubenswrapper[4801]: I1124 21:09:43.982524 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.017109 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" podStartSLOduration=135.017080645 podStartE2EDuration="2m15.017080645s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:44.016337639 +0000 UTC m=+156.098924479" watchObservedRunningTime="2025-11-24 21:09:44.017080645 +0000 UTC m=+156.099667315" Nov 24 21:09:44 crc 
kubenswrapper[4801]: I1124 21:09:44.220512 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:09:44 crc kubenswrapper[4801]: W1124 21:09:44.232885 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef8f5e2_51f6_4ca1_8089_f6347a1efdcd.slice/crio-7c014260a3d16ee97ed1713594c29d0ef02b32968d728596b572d1af3047acc7 WatchSource:0}: Error finding container 7c014260a3d16ee97ed1713594c29d0ef02b32968d728596b572d1af3047acc7: Status 404 returned error can't find the container with id 7c014260a3d16ee97ed1713594c29d0ef02b32968d728596b572d1af3047acc7 Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.408117 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:44 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:44 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:44 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.408559 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.685806 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.990791 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"173a91e3-91fb-45a8-8ea5-7c01b290b45a","Type":"ContainerStarted","Data":"ac9722d6f61d33b5b3e42f03e152983d1471c1c5cf11df37d087fac5f29cbc51"} Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.996909 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerID="8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17" exitCode=0 Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.997743 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerDied","Data":"8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17"} Nov 24 21:09:44 crc kubenswrapper[4801]: I1124 21:09:44.997781 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerStarted","Data":"d8c65569f7666d959ec9141ab3af7766789c45224637b9661610fe359b68eba4"} Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 21:09:45.000987 4801 generic.go:334] "Generic (PLEG): container finished" podID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerID="e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b" exitCode=0 Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 21:09:45.001459 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerDied","Data":"e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b"} Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 21:09:45.001643 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerStarted","Data":"7c014260a3d16ee97ed1713594c29d0ef02b32968d728596b572d1af3047acc7"} Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 
21:09:45.013165 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.013139862 podStartE2EDuration="3.013139862s" podCreationTimestamp="2025-11-24 21:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:09:45.006715108 +0000 UTC m=+157.089301788" watchObservedRunningTime="2025-11-24 21:09:45.013139862 +0000 UTC m=+157.095726532" Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 21:09:45.408567 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:45 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:45 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:45 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:45 crc kubenswrapper[4801]: I1124 21:09:45.408925 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.014790 4801 generic.go:334] "Generic (PLEG): container finished" podID="173a91e3-91fb-45a8-8ea5-7c01b290b45a" containerID="ac9722d6f61d33b5b3e42f03e152983d1471c1c5cf11df37d087fac5f29cbc51" exitCode=0 Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.014952 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"173a91e3-91fb-45a8-8ea5-7c01b290b45a","Type":"ContainerDied","Data":"ac9722d6f61d33b5b3e42f03e152983d1471c1c5cf11df37d087fac5f29cbc51"} Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.296009 
4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l2sjb" Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.406331 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:46 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:46 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:46 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.406426 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.679504 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:46 crc kubenswrapper[4801]: I1124 21:09:46.686935 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xsbt9" Nov 24 21:09:47 crc kubenswrapper[4801]: I1124 21:09:47.407890 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:47 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:47 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:47 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:47 crc kubenswrapper[4801]: I1124 21:09:47.408460 4801 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:48 crc kubenswrapper[4801]: I1124 21:09:48.406286 4801 patch_prober.go:28] interesting pod/router-default-5444994796-7wsqf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:09:48 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Nov 24 21:09:48 crc kubenswrapper[4801]: [+]process-running ok Nov 24 21:09:48 crc kubenswrapper[4801]: healthz check failed Nov 24 21:09:48 crc kubenswrapper[4801]: I1124 21:09:48.406348 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7wsqf" podUID="458f9b65-7105-44c8-9322-5045d0087cc0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:09:49 crc kubenswrapper[4801]: I1124 21:09:49.408546 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:49 crc kubenswrapper[4801]: I1124 21:09:49.412133 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7wsqf" Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.194495 4801 patch_prober.go:28] interesting pod/console-f9d7485db-khn9r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.195230 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-khn9r" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.224128 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-mpfrd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.224200 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mpfrd" podUID="702dc398-86c5-4d2b-bc25-e8464ed36961" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.224662 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-mpfrd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.224687 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mpfrd" podUID="702dc398-86c5-4d2b-bc25-e8464ed36961" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.923262 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:51 crc kubenswrapper[4801]: I1124 21:09:51.946902 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3434122b-ad4c-40f8-89fc-8829fd158ae3-metrics-certs\") pod \"network-metrics-daemon-llnf4\" (UID: \"3434122b-ad4c-40f8-89fc-8829fd158ae3\") " pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:52 crc kubenswrapper[4801]: I1124 21:09:52.032722 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-llnf4" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.070873 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.127180 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"173a91e3-91fb-45a8-8ea5-7c01b290b45a","Type":"ContainerDied","Data":"e83d0a38bc941efda5686b42977d51ccf484e80d35985f898d904d5f99fe2ba3"} Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.127235 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83d0a38bc941efda5686b42977d51ccf484e80d35985f898d904d5f99fe2ba3" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.127354 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.261156 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access\") pod \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.261339 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir\") pod \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\" (UID: \"173a91e3-91fb-45a8-8ea5-7c01b290b45a\") " Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.261497 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "173a91e3-91fb-45a8-8ea5-7c01b290b45a" (UID: "173a91e3-91fb-45a8-8ea5-7c01b290b45a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.261749 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.268325 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "173a91e3-91fb-45a8-8ea5-7c01b290b45a" (UID: "173a91e3-91fb-45a8-8ea5-7c01b290b45a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.320503 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.320599 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:09:54 crc kubenswrapper[4801]: I1124 21:09:54.363751 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/173a91e3-91fb-45a8-8ea5-7c01b290b45a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:01 crc kubenswrapper[4801]: I1124 21:10:01.210828 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:10:01 crc kubenswrapper[4801]: I1124 21:10:01.215466 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:10:01 crc kubenswrapper[4801]: I1124 21:10:01.235072 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mpfrd" Nov 24 21:10:03 crc kubenswrapper[4801]: I1124 21:10:03.021544 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:10:09 crc kubenswrapper[4801]: E1124 21:10:09.640259 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 21:10:09 crc kubenswrapper[4801]: E1124 21:10:09.641706 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvxb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fkzh7_openshift-marketplace(20e2e67e-3f3e-404b-b489-8b498ba44334): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Nov 24 21:10:09 crc kubenswrapper[4801]: E1124 21:10:09.645552 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fkzh7" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" Nov 24 21:10:11 crc kubenswrapper[4801]: I1124 21:10:11.098671 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t4jd" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.087826 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fkzh7" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.159336 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.159571 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnnwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j4gl7_openshift-marketplace(4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.161627 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j4gl7" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" Nov 24 21:10:13 crc 
kubenswrapper[4801]: E1124 21:10:13.197275 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.197558 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8lcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-qj9jp_openshift-marketplace(6568362d-d2ac-41f4-84c8-46a363c8b042): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.198958 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qj9jp" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.260759 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j4gl7" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.261918 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qj9jp" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.276007 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.276421 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdw52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fsqp9_openshift-marketplace(db096112-8256-4335-8a37-c03fbc570dd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:10:13 crc kubenswrapper[4801]: E1124 21:10:13.278422 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fsqp9" podUID="db096112-8256-4335-8a37-c03fbc570dd9" Nov 24 21:10:13 crc kubenswrapper[4801]: I1124 21:10:13.656074 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-llnf4"] Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.266054 4801 generic.go:334] "Generic (PLEG): container finished" podID="84ba0c04-c136-47be-b036-5340070a8e23" containerID="3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720" exitCode=0 Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.266159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerDied","Data":"3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.270678 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b164446-8c7c-479a-84cf-c68680598934" containerID="3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80" exitCode=0 Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.270758 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerDied","Data":"3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.275412 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-llnf4" event={"ID":"3434122b-ad4c-40f8-89fc-8829fd158ae3","Type":"ContainerStarted","Data":"fc0f6dabc396a4aaf636b699a34cd916587b873af5156e5d3ca788f048363429"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.275500 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-llnf4" event={"ID":"3434122b-ad4c-40f8-89fc-8829fd158ae3","Type":"ContainerStarted","Data":"07c33d6a5334aaa1c7f01ed7e51cc7e00463f902402cc3478feac4f1eb31d4ae"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.275531 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-llnf4" event={"ID":"3434122b-ad4c-40f8-89fc-8829fd158ae3","Type":"ContainerStarted","Data":"3ffc4c30286d06bc68b49c0e9261bf6a2b1c0b1df7ff001a7d97198d46a31c5c"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.280488 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerID="3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be" exitCode=0 Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.280573 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerDied","Data":"3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be"} Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.284479 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerID="a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41" exitCode=0 Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.285444 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerDied","Data":"a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41"} Nov 24 21:10:14 crc kubenswrapper[4801]: E1124 21:10:14.288434 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-fsqp9" podUID="db096112-8256-4335-8a37-c03fbc570dd9" Nov 24 21:10:14 crc kubenswrapper[4801]: I1124 21:10:14.371589 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-llnf4" podStartSLOduration=165.371565667 podStartE2EDuration="2m45.371565667s" podCreationTimestamp="2025-11-24 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:10:14.369399341 +0000 UTC m=+186.451986011" watchObservedRunningTime="2025-11-24 21:10:14.371565667 +0000 UTC m=+186.454152327" Nov 24 21:10:15 crc kubenswrapper[4801]: I1124 21:10:15.300236 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerStarted","Data":"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096"} Nov 24 21:10:15 crc kubenswrapper[4801]: I1124 21:10:15.306127 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerStarted","Data":"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"} Nov 24 21:10:15 crc kubenswrapper[4801]: I1124 21:10:15.338429 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-clxkx" podStartSLOduration=2.636976755 podStartE2EDuration="33.338323073s" podCreationTimestamp="2025-11-24 21:09:42 +0000 UTC" firstStartedPulling="2025-11-24 21:09:44.02066022 +0000 UTC m=+156.103246890" lastFinishedPulling="2025-11-24 21:10:14.722006538 +0000 UTC m=+186.804593208" observedRunningTime="2025-11-24 21:10:15.331112961 +0000 UTC m=+187.413699721" watchObservedRunningTime="2025-11-24 21:10:15.338323073 +0000 UTC m=+187.420909783" Nov 24 21:10:15 crc kubenswrapper[4801]: 
I1124 21:10:15.356702 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hmrp" podStartSLOduration=2.461384748 podStartE2EDuration="35.356675942s" podCreationTimestamp="2025-11-24 21:09:40 +0000 UTC" firstStartedPulling="2025-11-24 21:09:41.92448508 +0000 UTC m=+154.007071740" lastFinishedPulling="2025-11-24 21:10:14.819776264 +0000 UTC m=+186.902362934" observedRunningTime="2025-11-24 21:10:15.349568655 +0000 UTC m=+187.432155365" watchObservedRunningTime="2025-11-24 21:10:15.356675942 +0000 UTC m=+187.439262632" Nov 24 21:10:16 crc kubenswrapper[4801]: I1124 21:10:16.315957 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerStarted","Data":"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3"} Nov 24 21:10:16 crc kubenswrapper[4801]: I1124 21:10:16.319989 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerStarted","Data":"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f"} Nov 24 21:10:16 crc kubenswrapper[4801]: I1124 21:10:16.336683 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wwqg7" podStartSLOduration=2.668711632 podStartE2EDuration="36.336660259s" podCreationTimestamp="2025-11-24 21:09:40 +0000 UTC" firstStartedPulling="2025-11-24 21:09:41.931998411 +0000 UTC m=+154.014585081" lastFinishedPulling="2025-11-24 21:10:15.599947038 +0000 UTC m=+187.682533708" observedRunningTime="2025-11-24 21:10:16.334274696 +0000 UTC m=+188.416861376" watchObservedRunningTime="2025-11-24 21:10:16.336660259 +0000 UTC m=+188.419246929" Nov 24 21:10:16 crc kubenswrapper[4801]: I1124 21:10:16.795654 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:10:16 crc kubenswrapper[4801]: I1124 21:10:16.819348 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5gn65" podStartSLOduration=3.050464998 podStartE2EDuration="33.819328937s" podCreationTimestamp="2025-11-24 21:09:43 +0000 UTC" firstStartedPulling="2025-11-24 21:09:44.998550964 +0000 UTC m=+157.081137634" lastFinishedPulling="2025-11-24 21:10:15.767414903 +0000 UTC m=+187.850001573" observedRunningTime="2025-11-24 21:10:16.374811868 +0000 UTC m=+188.457398528" watchObservedRunningTime="2025-11-24 21:10:16.819328937 +0000 UTC m=+188.901915607" Nov 24 21:10:19 crc kubenswrapper[4801]: I1124 21:10:19.286143 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.461350 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.461928 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.671079 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.709268 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.709385 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:20 crc kubenswrapper[4801]: I1124 21:10:20.756389 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:21 crc kubenswrapper[4801]: I1124 21:10:21.401013 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:21 crc kubenswrapper[4801]: I1124 21:10:21.406035 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:10:22 crc kubenswrapper[4801]: I1124 21:10:22.076764 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:10:22 crc kubenswrapper[4801]: I1124 21:10:22.856496 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:22 crc kubenswrapper[4801]: I1124 21:10:22.856565 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:22 crc kubenswrapper[4801]: I1124 21:10:22.899653 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.360559 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wwqg7" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="registry-server" containerID="cri-o://e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3" gracePeriod=2 Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.410196 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.464609 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.464684 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.526657 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:10:23 crc kubenswrapper[4801]: I1124 21:10:23.834921 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.002610 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities\") pod \"2b164446-8c7c-479a-84cf-c68680598934\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.002761 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content\") pod \"2b164446-8c7c-479a-84cf-c68680598934\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.002805 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcfph\" (UniqueName: \"kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph\") pod \"2b164446-8c7c-479a-84cf-c68680598934\" (UID: \"2b164446-8c7c-479a-84cf-c68680598934\") " Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.003702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities" (OuterVolumeSpecName: "utilities") pod "2b164446-8c7c-479a-84cf-c68680598934" (UID: "2b164446-8c7c-479a-84cf-c68680598934"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.012804 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph" (OuterVolumeSpecName: "kube-api-access-rcfph") pod "2b164446-8c7c-479a-84cf-c68680598934" (UID: "2b164446-8c7c-479a-84cf-c68680598934"). InnerVolumeSpecName "kube-api-access-rcfph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.020201 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcfph\" (UniqueName: \"kubernetes.io/projected/2b164446-8c7c-479a-84cf-c68680598934-kube-api-access-rcfph\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.020236 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.059341 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b164446-8c7c-479a-84cf-c68680598934" (UID: "2b164446-8c7c-479a-84cf-c68680598934"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.121732 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b164446-8c7c-479a-84cf-c68680598934-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.319617 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.319719 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.372733 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b164446-8c7c-479a-84cf-c68680598934" containerID="e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3" exitCode=0 Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.372830 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwqg7" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.372818 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerDied","Data":"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3"} Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.373033 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwqg7" event={"ID":"2b164446-8c7c-479a-84cf-c68680598934","Type":"ContainerDied","Data":"af351f2706493d29d689dd559c2499b916641a40cb60a63bc553e2bc70ea3508"} Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.373077 4801 scope.go:117] "RemoveContainer" containerID="e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.401686 4801 scope.go:117] "RemoveContainer" containerID="3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.434528 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.444395 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.444434 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wwqg7"] Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.488069 4801 scope.go:117] "RemoveContainer" containerID="eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.509631 4801 scope.go:117] "RemoveContainer" containerID="e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3" Nov 24 21:10:24 crc 
kubenswrapper[4801]: E1124 21:10:24.513822 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3\": container with ID starting with e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3 not found: ID does not exist" containerID="e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.513863 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3"} err="failed to get container status \"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3\": rpc error: code = NotFound desc = could not find container \"e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3\": container with ID starting with e2045de256617ced86774ede6870f2d07e788642f6e2cd5dcbf6ec7f4611afa3 not found: ID does not exist" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.513911 4801 scope.go:117] "RemoveContainer" containerID="3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80" Nov 24 21:10:24 crc kubenswrapper[4801]: E1124 21:10:24.514272 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80\": container with ID starting with 3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80 not found: ID does not exist" containerID="3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.514298 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80"} err="failed to get container status 
\"3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80\": rpc error: code = NotFound desc = could not find container \"3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80\": container with ID starting with 3c0abc77b7d48a5d2666cc63be29f4f982e45ad0b197fd250a280ef321b6be80 not found: ID does not exist" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.514315 4801 scope.go:117] "RemoveContainer" containerID="eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f" Nov 24 21:10:24 crc kubenswrapper[4801]: E1124 21:10:24.514934 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f\": container with ID starting with eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f not found: ID does not exist" containerID="eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.515052 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f"} err="failed to get container status \"eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f\": rpc error: code = NotFound desc = could not find container \"eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f\": container with ID starting with eaee66da062be7b912d09664d236d129fca7771f22f9ca697bd0ba64c9d1ea6f not found: ID does not exist" Nov 24 21:10:24 crc kubenswrapper[4801]: I1124 21:10:24.671638 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b164446-8c7c-479a-84cf-c68680598934" path="/var/lib/kubelet/pods/2b164446-8c7c-479a-84cf-c68680598934/volumes" Nov 24 21:10:26 crc kubenswrapper[4801]: I1124 21:10:26.471719 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 
21:10:26 crc kubenswrapper[4801]: I1124 21:10:26.472012 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-clxkx" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="registry-server" containerID="cri-o://7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096" gracePeriod=2 Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.385522 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.395997 4801 generic.go:334] "Generic (PLEG): container finished" podID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerID="06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f" exitCode=0 Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.396065 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerDied","Data":"06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f"} Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.402571 4801 generic.go:334] "Generic (PLEG): container finished" podID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerID="02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7" exitCode=0 Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.402663 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerDied","Data":"02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7"} Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.408323 4801 generic.go:334] "Generic (PLEG): container finished" podID="84ba0c04-c136-47be-b036-5340070a8e23" containerID="7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096" exitCode=0 Nov 24 21:10:27 crc 
kubenswrapper[4801]: I1124 21:10:27.408393 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerDied","Data":"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096"} Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.408445 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clxkx" event={"ID":"84ba0c04-c136-47be-b036-5340070a8e23","Type":"ContainerDied","Data":"31c6a786233fd2e46d094f0b904d0223df95606af2873d217f9766179aee020e"} Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.408471 4801 scope.go:117] "RemoveContainer" containerID="7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.408601 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clxkx" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.428909 4801 scope.go:117] "RemoveContainer" containerID="3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.450172 4801 scope.go:117] "RemoveContainer" containerID="ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.464947 4801 scope.go:117] "RemoveContainer" containerID="7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096" Nov 24 21:10:27 crc kubenswrapper[4801]: E1124 21:10:27.465434 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096\": container with ID starting with 7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096 not found: ID does not exist" containerID="7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096" 
Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.465475 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096"} err="failed to get container status \"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096\": rpc error: code = NotFound desc = could not find container \"7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096\": container with ID starting with 7556d66f56c288af761eb2c8e989cd2ecf7f49309a0d9d8288cd0c7966ba8096 not found: ID does not exist" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.465500 4801 scope.go:117] "RemoveContainer" containerID="3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720" Nov 24 21:10:27 crc kubenswrapper[4801]: E1124 21:10:27.465823 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720\": container with ID starting with 3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720 not found: ID does not exist" containerID="3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.465890 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720"} err="failed to get container status \"3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720\": rpc error: code = NotFound desc = could not find container \"3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720\": container with ID starting with 3bba03e9f00be0d7970aa27d6f60063967837165acf658b8d1cfb604a7bae720 not found: ID does not exist" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.465932 4801 scope.go:117] "RemoveContainer" 
containerID="ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f" Nov 24 21:10:27 crc kubenswrapper[4801]: E1124 21:10:27.466484 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f\": container with ID starting with ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f not found: ID does not exist" containerID="ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.466541 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f"} err="failed to get container status \"ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f\": rpc error: code = NotFound desc = could not find container \"ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f\": container with ID starting with ca5fd2e19fe727155bb166ac6c69f59928d223f3f10a334bd61ee428c579330f not found: ID does not exist" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.477136 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities\") pod \"84ba0c04-c136-47be-b036-5340070a8e23\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.477194 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content\") pod \"84ba0c04-c136-47be-b036-5340070a8e23\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.477502 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-j69dv\" (UniqueName: \"kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv\") pod \"84ba0c04-c136-47be-b036-5340070a8e23\" (UID: \"84ba0c04-c136-47be-b036-5340070a8e23\") " Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.478658 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities" (OuterVolumeSpecName: "utilities") pod "84ba0c04-c136-47be-b036-5340070a8e23" (UID: "84ba0c04-c136-47be-b036-5340070a8e23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.484130 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv" (OuterVolumeSpecName: "kube-api-access-j69dv") pod "84ba0c04-c136-47be-b036-5340070a8e23" (UID: "84ba0c04-c136-47be-b036-5340070a8e23"). InnerVolumeSpecName "kube-api-access-j69dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.495799 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84ba0c04-c136-47be-b036-5340070a8e23" (UID: "84ba0c04-c136-47be-b036-5340070a8e23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.580135 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j69dv\" (UniqueName: \"kubernetes.io/projected/84ba0c04-c136-47be-b036-5340070a8e23-kube-api-access-j69dv\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.580185 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.580196 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ba0c04-c136-47be-b036-5340070a8e23-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.748612 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 21:10:27 crc kubenswrapper[4801]: I1124 21:10:27.759652 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-clxkx"] Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.417702 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerStarted","Data":"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106"} Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.420323 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerStarted","Data":"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"} Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.423454 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerStarted","Data":"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6"} Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.463767 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fkzh7" podStartSLOduration=3.362139846 podStartE2EDuration="49.463735734s" podCreationTimestamp="2025-11-24 21:09:39 +0000 UTC" firstStartedPulling="2025-11-24 21:09:41.929738243 +0000 UTC m=+154.012324913" lastFinishedPulling="2025-11-24 21:10:28.031334121 +0000 UTC m=+200.113920801" observedRunningTime="2025-11-24 21:10:28.459623683 +0000 UTC m=+200.542210373" watchObservedRunningTime="2025-11-24 21:10:28.463735734 +0000 UTC m=+200.546322414" Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.483538 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qj9jp" podStartSLOduration=2.628004888 podStartE2EDuration="46.483510425s" podCreationTimestamp="2025-11-24 21:09:42 +0000 UTC" firstStartedPulling="2025-11-24 21:09:44.020749183 +0000 UTC m=+156.103335853" lastFinishedPulling="2025-11-24 21:10:27.87625472 +0000 UTC m=+199.958841390" observedRunningTime="2025-11-24 21:10:28.47893302 +0000 UTC m=+200.561519700" watchObservedRunningTime="2025-11-24 21:10:28.483510425 +0000 UTC m=+200.566097106" Nov 24 21:10:28 crc kubenswrapper[4801]: I1124 21:10:28.677295 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ba0c04-c136-47be-b036-5340070a8e23" path="/var/lib/kubelet/pods/84ba0c04-c136-47be-b036-5340070a8e23/volumes" Nov 24 21:10:29 crc kubenswrapper[4801]: I1124 21:10:29.430940 4801 generic.go:334] "Generic (PLEG): container finished" podID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerID="d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106" exitCode=0 Nov 24 21:10:29 crc 
kubenswrapper[4801]: I1124 21:10:29.431013 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerDied","Data":"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106"} Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.243969 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.244482 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.304458 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.439839 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerStarted","Data":"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d"} Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.442226 4801 generic.go:334] "Generic (PLEG): container finished" podID="db096112-8256-4335-8a37-c03fbc570dd9" containerID="cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35" exitCode=0 Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.442351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerDied","Data":"cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35"} Nov 24 21:10:30 crc kubenswrapper[4801]: I1124 21:10:30.459331 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j4gl7" podStartSLOduration=2.340375631 
podStartE2EDuration="47.459292779s" podCreationTimestamp="2025-11-24 21:09:43 +0000 UTC" firstStartedPulling="2025-11-24 21:09:45.00359779 +0000 UTC m=+157.086184460" lastFinishedPulling="2025-11-24 21:10:30.122514938 +0000 UTC m=+202.205101608" observedRunningTime="2025-11-24 21:10:30.457959557 +0000 UTC m=+202.540546227" watchObservedRunningTime="2025-11-24 21:10:30.459292779 +0000 UTC m=+202.541879489" Nov 24 21:10:31 crc kubenswrapper[4801]: I1124 21:10:31.450442 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerStarted","Data":"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f"} Nov 24 21:10:31 crc kubenswrapper[4801]: I1124 21:10:31.470312 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fsqp9" podStartSLOduration=2.534564854 podStartE2EDuration="51.470287283s" podCreationTimestamp="2025-11-24 21:09:40 +0000 UTC" firstStartedPulling="2025-11-24 21:09:41.924064025 +0000 UTC m=+154.006650695" lastFinishedPulling="2025-11-24 21:10:30.859786454 +0000 UTC m=+202.942373124" observedRunningTime="2025-11-24 21:10:31.46830125 +0000 UTC m=+203.550887920" watchObservedRunningTime="2025-11-24 21:10:31.470287283 +0000 UTC m=+203.552873963" Nov 24 21:10:32 crc kubenswrapper[4801]: I1124 21:10:32.447008 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:10:32 crc kubenswrapper[4801]: I1124 21:10:32.447081 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:10:32 crc kubenswrapper[4801]: I1124 21:10:32.504743 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:10:33 crc kubenswrapper[4801]: I1124 21:10:33.930423 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:33 crc kubenswrapper[4801]: I1124 21:10:33.931270 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:34 crc kubenswrapper[4801]: I1124 21:10:34.972037 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j4gl7" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="registry-server" probeResult="failure" output=< Nov 24 21:10:34 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:10:34 crc kubenswrapper[4801]: > Nov 24 21:10:40 crc kubenswrapper[4801]: I1124 21:10:40.330178 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:10:40 crc kubenswrapper[4801]: I1124 21:10:40.915508 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:40 crc kubenswrapper[4801]: I1124 21:10:40.915585 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:40 crc kubenswrapper[4801]: I1124 21:10:40.987833 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:41 crc kubenswrapper[4801]: I1124 21:10:41.594965 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:42 crc kubenswrapper[4801]: I1124 21:10:42.488024 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:10:43 crc kubenswrapper[4801]: I1124 21:10:43.074145 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:10:43 crc kubenswrapper[4801]: I1124 21:10:43.545870 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fsqp9" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="registry-server" containerID="cri-o://c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f" gracePeriod=2 Nov 24 21:10:43 crc kubenswrapper[4801]: I1124 21:10:43.996350 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.040820 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.077167 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.176244 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdw52\" (UniqueName: \"kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52\") pod \"db096112-8256-4335-8a37-c03fbc570dd9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.176314 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content\") pod \"db096112-8256-4335-8a37-c03fbc570dd9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.176341 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities\") pod 
\"db096112-8256-4335-8a37-c03fbc570dd9\" (UID: \"db096112-8256-4335-8a37-c03fbc570dd9\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.177452 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities" (OuterVolumeSpecName: "utilities") pod "db096112-8256-4335-8a37-c03fbc570dd9" (UID: "db096112-8256-4335-8a37-c03fbc570dd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.184679 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52" (OuterVolumeSpecName: "kube-api-access-rdw52") pod "db096112-8256-4335-8a37-c03fbc570dd9" (UID: "db096112-8256-4335-8a37-c03fbc570dd9"). InnerVolumeSpecName "kube-api-access-rdw52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.268598 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db096112-8256-4335-8a37-c03fbc570dd9" (UID: "db096112-8256-4335-8a37-c03fbc570dd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.277743 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.277801 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db096112-8256-4335-8a37-c03fbc570dd9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.277817 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdw52\" (UniqueName: \"kubernetes.io/projected/db096112-8256-4335-8a37-c03fbc570dd9-kube-api-access-rdw52\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.318269 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" containerID="cri-o://03887479b476a81195b536b9bf79c4901e63ac4e7012dd2741638f5cb28c2f9d" gracePeriod=15 Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.557988 4801 generic.go:334] "Generic (PLEG): container finished" podID="db096112-8256-4335-8a37-c03fbc570dd9" containerID="c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f" exitCode=0 Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.558124 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerDied","Data":"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f"} Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.558185 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsqp9" 
event={"ID":"db096112-8256-4335-8a37-c03fbc570dd9","Type":"ContainerDied","Data":"feded5aa2850f9d199203aa37aed36735d38a7305a37ffc78125b36d930f9ad6"} Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.558225 4801 scope.go:117] "RemoveContainer" containerID="c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.558507 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsqp9" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.566512 4801 generic.go:334] "Generic (PLEG): container finished" podID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerID="03887479b476a81195b536b9bf79c4901e63ac4e7012dd2741638f5cb28c2f9d" exitCode=0 Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.566630 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" event={"ID":"38662694-befb-4e2c-9a82-e0bc5ae846db","Type":"ContainerDied","Data":"03887479b476a81195b536b9bf79c4901e63ac4e7012dd2741638f5cb28c2f9d"} Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.591060 4801 scope.go:117] "RemoveContainer" containerID="cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.610553 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.617898 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fsqp9"] Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.636075 4801 scope.go:117] "RemoveContainer" containerID="734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.656100 4801 scope.go:117] "RemoveContainer" containerID="c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f" Nov 
24 21:10:44 crc kubenswrapper[4801]: E1124 21:10:44.658036 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f\": container with ID starting with c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f not found: ID does not exist" containerID="c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.658072 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f"} err="failed to get container status \"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f\": rpc error: code = NotFound desc = could not find container \"c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f\": container with ID starting with c056b40a9fd5370d29c3f6b07e02111fdf8578ddc35e573d38a012b4b060f03f not found: ID does not exist" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.658103 4801 scope.go:117] "RemoveContainer" containerID="cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35" Nov 24 21:10:44 crc kubenswrapper[4801]: E1124 21:10:44.658573 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35\": container with ID starting with cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35 not found: ID does not exist" containerID="cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.658591 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35"} err="failed to get container status 
\"cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35\": rpc error: code = NotFound desc = could not find container \"cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35\": container with ID starting with cc48c65f3c61786dfe95449d70cf8e88de1bd68e1df43179d13220d094761e35 not found: ID does not exist" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.658605 4801 scope.go:117] "RemoveContainer" containerID="734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89" Nov 24 21:10:44 crc kubenswrapper[4801]: E1124 21:10:44.664030 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89\": container with ID starting with 734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89 not found: ID does not exist" containerID="734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.664121 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89"} err="failed to get container status \"734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89\": rpc error: code = NotFound desc = could not find container \"734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89\": container with ID starting with 734b68a6d02b175e8839ecdd8175a7427eb8b5b66a23d04166acf87eccfa4b89 not found: ID does not exist" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.687080 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db096112-8256-4335-8a37-c03fbc570dd9" path="/var/lib/kubelet/pods/db096112-8256-4335-8a37-c03fbc570dd9/volumes" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.807498 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.988850 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.989004 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.989043 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.989120 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.989262 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.990267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.990319 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.990340 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.990439 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.990957 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991025 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991074 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991121 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: 
\"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991192 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48n8b\" (UniqueName: \"kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991233 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.991494 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.992341 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.992634 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.992719 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session\") pod \"38662694-befb-4e2c-9a82-e0bc5ae846db\" (UID: \"38662694-befb-4e2c-9a82-e0bc5ae846db\") " Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.993338 4801 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.993497 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.993534 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.993622 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.993654 
4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:44 crc kubenswrapper[4801]: I1124 21:10:44.999792 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.000013 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b" (OuterVolumeSpecName: "kube-api-access-48n8b") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "kube-api-access-48n8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.000090 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.000449 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.001031 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.001524 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.001758 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.002185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.002609 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "38662694-befb-4e2c-9a82-e0bc5ae846db" (UID: "38662694-befb-4e2c-9a82-e0bc5ae846db"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094566 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094626 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094647 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094667 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094688 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48n8b\" (UniqueName: \"kubernetes.io/projected/38662694-befb-4e2c-9a82-e0bc5ae846db-kube-api-access-48n8b\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094708 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094729 4801 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094748 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.094767 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/38662694-befb-4e2c-9a82-e0bc5ae846db-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.576850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" event={"ID":"38662694-befb-4e2c-9a82-e0bc5ae846db","Type":"ContainerDied","Data":"8b8607e737c66b14ce1b86a86ac074e578aa119d27aaafb9c227a79691417729"} Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.576907 4801 scope.go:117] "RemoveContainer" containerID="03887479b476a81195b536b9bf79c4901e63ac4e7012dd2741638f5cb28c2f9d" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.577003 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4jjjc" Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.624024 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:10:45 crc kubenswrapper[4801]: I1124 21:10:45.632150 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4jjjc"] Nov 24 21:10:46 crc kubenswrapper[4801]: I1124 21:10:46.679406 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" path="/var/lib/kubelet/pods/38662694-befb-4e2c-9a82-e0bc5ae846db/volumes" Nov 24 21:10:46 crc kubenswrapper[4801]: I1124 21:10:46.879189 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:10:46 crc kubenswrapper[4801]: I1124 21:10:46.879610 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j4gl7" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="registry-server" containerID="cri-o://674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d" gracePeriod=2 Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.378609 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.542265 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnwd\" (UniqueName: \"kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd\") pod \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.543895 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities\") pod \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.543971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content\") pod \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\" (UID: \"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd\") " Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.545532 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities" (OuterVolumeSpecName: "utilities") pod "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" (UID: "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.549348 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd" (OuterVolumeSpecName: "kube-api-access-fnnwd") pod "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" (UID: "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd"). InnerVolumeSpecName "kube-api-access-fnnwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.596431 4801 generic.go:334] "Generic (PLEG): container finished" podID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerID="674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d" exitCode=0 Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.596497 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerDied","Data":"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d"} Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.596552 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gl7" event={"ID":"4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd","Type":"ContainerDied","Data":"7c014260a3d16ee97ed1713594c29d0ef02b32968d728596b572d1af3047acc7"} Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.596589 4801 scope.go:117] "RemoveContainer" containerID="674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.596781 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gl7" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.627037 4801 scope.go:117] "RemoveContainer" containerID="d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.646454 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnnwd\" (UniqueName: \"kubernetes.io/projected/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-kube-api-access-fnnwd\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.646508 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.649353 4801 scope.go:117] "RemoveContainer" containerID="e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.684440 4801 scope.go:117] "RemoveContainer" containerID="674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d" Nov 24 21:10:47 crc kubenswrapper[4801]: E1124 21:10:47.685945 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d\": container with ID starting with 674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d not found: ID does not exist" containerID="674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.686171 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d"} err="failed to get container status \"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d\": rpc error: code = NotFound desc = could not 
find container \"674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d\": container with ID starting with 674d42259f195b7c97a2aae073a4268d16193caee89e70092827189c5933155d not found: ID does not exist" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.686454 4801 scope.go:117] "RemoveContainer" containerID="d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106" Nov 24 21:10:47 crc kubenswrapper[4801]: E1124 21:10:47.687813 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106\": container with ID starting with d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106 not found: ID does not exist" containerID="d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.687864 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106"} err="failed to get container status \"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106\": rpc error: code = NotFound desc = could not find container \"d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106\": container with ID starting with d87190b4ec64967eda85f157ffe6d9f8d44bf553cda6c1cd10c40f583eb72106 not found: ID does not exist" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.687918 4801 scope.go:117] "RemoveContainer" containerID="e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b" Nov 24 21:10:47 crc kubenswrapper[4801]: E1124 21:10:47.688663 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b\": container with ID starting with e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b not found: ID 
does not exist" containerID="e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.688726 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b"} err="failed to get container status \"e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b\": rpc error: code = NotFound desc = could not find container \"e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b\": container with ID starting with e6aec0172051d146f0dec587f64b6e90e344710c21b5484d2d0d08137cb3368b not found: ID does not exist" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.692947 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" (UID: "4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.747463 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.937709 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:10:47 crc kubenswrapper[4801]: I1124 21:10:47.942770 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j4gl7"] Nov 24 21:10:48 crc kubenswrapper[4801]: I1124 21:10:48.678344 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" path="/var/lib/kubelet/pods/4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd/volumes" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.589642 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-745f6bf96d-7wlrh"] Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590732 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590749 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590763 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590772 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590787 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590796 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590809 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590817 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590833 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590842 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590854 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173a91e3-91fb-45a8-8ea5-7c01b290b45a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590864 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="173a91e3-91fb-45a8-8ea5-7c01b290b45a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590880 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590888 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590901 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590909 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590921 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590929 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590941 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590949 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590963 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590972 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.590982 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.590990 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.591004 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591013 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="extract-utilities" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.591025 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591034 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: E1124 21:10:52.591048 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591056 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="extract-content" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591176 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="173a91e3-91fb-45a8-8ea5-7c01b290b45a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591188 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b164446-8c7c-479a-84cf-c68680598934" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591199 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="db096112-8256-4335-8a37-c03fbc570dd9" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591208 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef8f5e2-51f6-4ca1-8089-f6347a1efdcd" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591220 4801 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="84ba0c04-c136-47be-b036-5340070a8e23" containerName="registry-server" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591232 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="38662694-befb-4e2c-9a82-e0bc5ae846db" containerName="oauth-openshift" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591247 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="676bb412-b5c7-4f3a-a8a0-c0f49e7fe60a" containerName="pruner" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.591769 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.595917 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.596575 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.597019 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.597942 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.598051 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.598864 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.599212 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 21:10:52 
crc kubenswrapper[4801]: I1124 21:10:52.602895 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.602987 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.603073 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.603557 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.603771 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.611292 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.617940 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-745f6bf96d-7wlrh"] Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.618272 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.624770 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660678 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-session\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660785 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-login\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660818 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660842 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: 
\"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-dir\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660923 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660956 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txdk\" (UniqueName: \"kubernetes.io/projected/b6a8404d-653d-41a6-97ec-f8da941a44e8-kube-api-access-7txdk\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.660986 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 
21:10:52.661016 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-policies\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.661051 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.661074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.661094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-error\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.661124 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762239 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7txdk\" (UniqueName: \"kubernetes.io/projected/b6a8404d-653d-41a6-97ec-f8da941a44e8-kube-api-access-7txdk\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762266 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762338 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-policies\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " 
pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762396 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762412 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-error\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762491 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762519 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-session\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762542 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762561 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-login\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762580 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " 
pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.762628 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-dir\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.764534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-dir\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.772575 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-error\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.772873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.773961 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.775969 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.777013 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.777608 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-session\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.778095 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-login\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" 
Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.778158 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.779522 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.781652 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-audit-policies\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.782690 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.786169 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b6a8404d-653d-41a6-97ec-f8da941a44e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.792891 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7txdk\" (UniqueName: \"kubernetes.io/projected/b6a8404d-653d-41a6-97ec-f8da941a44e8-kube-api-access-7txdk\") pod \"oauth-openshift-745f6bf96d-7wlrh\" (UID: \"b6a8404d-653d-41a6-97ec-f8da941a44e8\") " pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:52 crc kubenswrapper[4801]: I1124 21:10:52.920988 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.182055 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-745f6bf96d-7wlrh"] Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.663950 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" event={"ID":"b6a8404d-653d-41a6-97ec-f8da941a44e8","Type":"ContainerStarted","Data":"3cf7dbfbd2623c451c4cebd77bb423b57652dd087364b64fb21b525bf533f7a3"} Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.664495 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.664524 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" event={"ID":"b6a8404d-653d-41a6-97ec-f8da941a44e8","Type":"ContainerStarted","Data":"99f0fe2d7a604c47b00b38d47b4decacef87e55373b1831c48485bd5ee00e5a7"} Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.698629 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" podStartSLOduration=34.698597402 podStartE2EDuration="34.698597402s" podCreationTimestamp="2025-11-24 21:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:10:53.696085962 +0000 UTC m=+225.778672662" watchObservedRunningTime="2025-11-24 21:10:53.698597402 +0000 UTC m=+225.781184112" Nov 24 21:10:53 crc kubenswrapper[4801]: I1124 21:10:53.748615 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-745f6bf96d-7wlrh" Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.320333 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.321014 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.321357 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.322939 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065"} 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.323298 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065" gracePeriod=600 Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.675381 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065" exitCode=0 Nov 24 21:10:54 crc kubenswrapper[4801]: I1124 21:10:54.676575 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065"} Nov 24 21:10:55 crc kubenswrapper[4801]: I1124 21:10:55.688785 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.295403 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.302005 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fkzh7" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="registry-server" 
containerID="cri-o://cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb" gracePeriod=30 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.305425 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hmrp"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.306130 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hmrp" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="registry-server" containerID="cri-o://466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58" gracePeriod=30 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.316172 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.316479 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator" containerID="cri-o://823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6" gracePeriod=30 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.328302 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.328780 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qj9jp" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="registry-server" containerID="cri-o://5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6" gracePeriod=30 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.342804 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.342873 4801 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d2n4x"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.343615 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.344107 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5gn65" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="registry-server" containerID="cri-o://ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" gracePeriod=30 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.359540 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d2n4x"] Nov 24 21:11:23 crc kubenswrapper[4801]: E1124 21:11:23.468253 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f is running failed: container process not found" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 21:11:23 crc kubenswrapper[4801]: E1124 21:11:23.468733 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f is running failed: container process not found" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 21:11:23 crc kubenswrapper[4801]: E1124 21:11:23.469023 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f is running failed: container process not found" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 21:11:23 crc kubenswrapper[4801]: E1124 21:11:23.469057 4801 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5gn65" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="registry-server" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.509232 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.509305 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhh98\" (UniqueName: \"kubernetes.io/projected/0cd03aa3-b315-4fad-904e-616d00db6ce6-kube-api-access-bhh98\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.509573 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.614356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.614922 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.614958 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhh98\" (UniqueName: \"kubernetes.io/projected/0cd03aa3-b315-4fad-904e-616d00db6ce6-kube-api-access-bhh98\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.617457 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.629422 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0cd03aa3-b315-4fad-904e-616d00db6ce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.632438 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhh98\" (UniqueName: \"kubernetes.io/projected/0cd03aa3-b315-4fad-904e-616d00db6ce6-kube-api-access-bhh98\") pod \"marketplace-operator-79b997595-d2n4x\" (UID: \"0cd03aa3-b315-4fad-904e-616d00db6ce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.743425 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.794470 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.802465 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.865638 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.870961 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.877757 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926092 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content\") pod \"6568362d-d2ac-41f4-84c8-46a363c8b042\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926165 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvxb7\" (UniqueName: \"kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7\") pod \"20e2e67e-3f3e-404b-b489-8b498ba44334\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926188 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities\") pod \"6568362d-d2ac-41f4-84c8-46a363c8b042\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926254 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities\") pod \"20e2e67e-3f3e-404b-b489-8b498ba44334\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926326 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content\") pod \"20e2e67e-3f3e-404b-b489-8b498ba44334\" (UID: \"20e2e67e-3f3e-404b-b489-8b498ba44334\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.926391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-j8lcp\" (UniqueName: \"kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp\") pod \"6568362d-d2ac-41f4-84c8-46a363c8b042\" (UID: \"6568362d-d2ac-41f4-84c8-46a363c8b042\") " Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.927238 4801 generic.go:334] "Generic (PLEG): container finished" podID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerID="823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6" exitCode=0 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.928163 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.928197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" event={"ID":"12387df0-31c7-4315-b960-ec5ff2e629c6","Type":"ContainerDied","Data":"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.928231 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kl9g5" event={"ID":"12387df0-31c7-4315-b960-ec5ff2e629c6","Type":"ContainerDied","Data":"bb124cafd7460f3907f24e86fa23cc6bed47f407afe74e899706d434d00fd5dc"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.928269 4801 scope.go:117] "RemoveContainer" containerID="823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.928025 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities" (OuterVolumeSpecName: "utilities") pod "20e2e67e-3f3e-404b-b489-8b498ba44334" (UID: "20e2e67e-3f3e-404b-b489-8b498ba44334"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.930230 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities" (OuterVolumeSpecName: "utilities") pod "6568362d-d2ac-41f4-84c8-46a363c8b042" (UID: "6568362d-d2ac-41f4-84c8-46a363c8b042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.939982 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp" (OuterVolumeSpecName: "kube-api-access-j8lcp") pod "6568362d-d2ac-41f4-84c8-46a363c8b042" (UID: "6568362d-d2ac-41f4-84c8-46a363c8b042"). InnerVolumeSpecName "kube-api-access-j8lcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.944043 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" exitCode=0 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.944261 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5gn65" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.944323 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerDied","Data":"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.944437 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gn65" event={"ID":"2b1cb531-6f5c-4298-b301-dd4a644eaf50","Type":"ContainerDied","Data":"d8c65569f7666d959ec9141ab3af7766789c45224637b9661610fe359b68eba4"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.944936 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7" (OuterVolumeSpecName: "kube-api-access-dvxb7") pod "20e2e67e-3f3e-404b-b489-8b498ba44334" (UID: "20e2e67e-3f3e-404b-b489-8b498ba44334"). InnerVolumeSpecName "kube-api-access-dvxb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.952658 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6568362d-d2ac-41f4-84c8-46a363c8b042" (UID: "6568362d-d2ac-41f4-84c8-46a363c8b042"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.953457 4801 generic.go:334] "Generic (PLEG): container finished" podID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerID="5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6" exitCode=0 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.953531 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerDied","Data":"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.953592 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj9jp" event={"ID":"6568362d-d2ac-41f4-84c8-46a363c8b042","Type":"ContainerDied","Data":"a3fc7b45316c89150c3dd24f0406460d37e12323796839d5a8c4bf4a210bdc4a"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.953692 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj9jp" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.958748 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerID="466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58" exitCode=0 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.958835 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerDied","Data":"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.958865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hmrp" event={"ID":"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b","Type":"ContainerDied","Data":"2a1f0134af674e36cd8228ca7c63d02a2a521b25313ffc953d0d5321672ff5c7"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.959228 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hmrp" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.961269 4801 scope.go:117] "RemoveContainer" containerID="823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6" Nov 24 21:11:23 crc kubenswrapper[4801]: E1124 21:11:23.962730 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6\": container with ID starting with 823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6 not found: ID does not exist" containerID="823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.962924 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6"} err="failed to get container status \"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6\": rpc error: code = NotFound desc = could not find container \"823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6\": container with ID starting with 823c0b87bc7ae2ec776c28a2d1eb3b4caccf86a572ff6cbe815225057dcda7b6 not found: ID does not exist" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.963108 4801 scope.go:117] "RemoveContainer" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.966803 4801 generic.go:334] "Generic (PLEG): container finished" podID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerID="cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb" exitCode=0 Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.966874 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkzh7" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.966858 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerDied","Data":"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.967220 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkzh7" event={"ID":"20e2e67e-3f3e-404b-b489-8b498ba44334","Type":"ContainerDied","Data":"1d1caaeb534c6dce815dfb10289fefba29f954be51e7c89443a4926723ce6cd0"} Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.978690 4801 scope.go:117] "RemoveContainer" containerID="3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.996356 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.997614 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20e2e67e-3f3e-404b-b489-8b498ba44334" (UID: "20e2e67e-3f3e-404b-b489-8b498ba44334"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:23 crc kubenswrapper[4801]: I1124 21:11:23.999292 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj9jp"] Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.011055 4801 scope.go:117] "RemoveContainer" containerID="8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027171 4801 scope.go:117] "RemoveContainer" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027659 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content\") pod \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027721 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities\") pod \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027760 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities\") pod \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.027764 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f\": container with ID starting with ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f not found: ID 
does not exist" containerID="ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027816 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f"} err="failed to get container status \"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f\": rpc error: code = NotFound desc = could not find container \"ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f\": container with ID starting with ca9a565d2cdae37cd44c2492926b70232ec126e305a8b5f90c1c8fdbe36ebf7f not found: ID does not exist" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027855 4801 scope.go:117] "RemoveContainer" containerID="3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics\") pod \"12387df0-31c7-4315-b960-ec5ff2e629c6\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027903 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xwbr\" (UniqueName: \"kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr\") pod \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.027950 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lbrg\" (UniqueName: \"kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg\") pod \"12387df0-31c7-4315-b960-ec5ff2e629c6\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " Nov 24 21:11:24 
crc kubenswrapper[4801]: I1124 21:11:24.027987 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca\") pod \"12387df0-31c7-4315-b960-ec5ff2e629c6\" (UID: \"12387df0-31c7-4315-b960-ec5ff2e629c6\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028004 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content\") pod \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\" (UID: \"ee5157ae-cc6c-41a3-a372-98ce4cd31e7b\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028050 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxpz2\" (UniqueName: \"kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2\") pod \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\" (UID: \"2b1cb531-6f5c-4298-b301-dd4a644eaf50\") " Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028331 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028348 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8lcp\" (UniqueName: \"kubernetes.io/projected/6568362d-d2ac-41f4-84c8-46a363c8b042-kube-api-access-j8lcp\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028375 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028408 4801 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-dvxb7\" (UniqueName: \"kubernetes.io/projected/20e2e67e-3f3e-404b-b489-8b498ba44334-kube-api-access-dvxb7\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028421 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568362d-d2ac-41f4-84c8-46a363c8b042-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028434 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e2e67e-3f3e-404b-b489-8b498ba44334-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.028775 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities" (OuterVolumeSpecName: "utilities") pod "2b1cb531-6f5c-4298-b301-dd4a644eaf50" (UID: "2b1cb531-6f5c-4298-b301-dd4a644eaf50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.029347 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities" (OuterVolumeSpecName: "utilities") pod "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" (UID: "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.030738 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be\": container with ID starting with 3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be not found: ID does not exist" containerID="3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.030882 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be"} err="failed to get container status \"3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be\": rpc error: code = NotFound desc = could not find container \"3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be\": container with ID starting with 3e8040099eda7f48b8932355e845deb68d399fc8a49cf8758636cce0175f59be not found: ID does not exist" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.031024 4801 scope.go:117] "RemoveContainer" containerID="8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17" Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.031357 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17\": container with ID starting with 8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17 not found: ID does not exist" containerID="8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.031557 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17"} 
err="failed to get container status \"8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17\": rpc error: code = NotFound desc = could not find container \"8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17\": container with ID starting with 8888d9080e66ab8a1f71dd3668066d6a61399ff18189e8fd887892dc48c95d17 not found: ID does not exist" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.031646 4801 scope.go:117] "RemoveContainer" containerID="5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.031770 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "12387df0-31c7-4315-b960-ec5ff2e629c6" (UID: "12387df0-31c7-4315-b960-ec5ff2e629c6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.032486 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr" (OuterVolumeSpecName: "kube-api-access-7xwbr") pod "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" (UID: "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b"). InnerVolumeSpecName "kube-api-access-7xwbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.033058 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "12387df0-31c7-4315-b960-ec5ff2e629c6" (UID: "12387df0-31c7-4315-b960-ec5ff2e629c6"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.034039 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg" (OuterVolumeSpecName: "kube-api-access-6lbrg") pod "12387df0-31c7-4315-b960-ec5ff2e629c6" (UID: "12387df0-31c7-4315-b960-ec5ff2e629c6"). InnerVolumeSpecName "kube-api-access-6lbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.034862 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2" (OuterVolumeSpecName: "kube-api-access-sxpz2") pod "2b1cb531-6f5c-4298-b301-dd4a644eaf50" (UID: "2b1cb531-6f5c-4298-b301-dd4a644eaf50"). InnerVolumeSpecName "kube-api-access-sxpz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.046530 4801 scope.go:117] "RemoveContainer" containerID="06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.066464 4801 scope.go:117] "RemoveContainer" containerID="cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7" Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.080677 4801 scope.go:117] "RemoveContainer" containerID="5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6" Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.081238 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6\": container with ID starting with 5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6 not found: ID does not exist" containerID="5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6" Nov 24 21:11:24 
crc kubenswrapper[4801]: I1124 21:11:24.081317 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6"} err="failed to get container status \"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6\": rpc error: code = NotFound desc = could not find container \"5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6\": container with ID starting with 5038792b8b03893d142eed22ae7ecdc51bda8993344986cf5390f5098f387bc6 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.081352 4801 scope.go:117] "RemoveContainer" containerID="06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.081703 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f\": container with ID starting with 06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f not found: ID does not exist" containerID="06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.081747 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f"} err="failed to get container status \"06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f\": rpc error: code = NotFound desc = could not find container \"06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f\": container with ID starting with 06982fb58ede1ed8f8e190a1e4b19a33fc032a816d375b0a142377f977af618f not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.081786 4801 scope.go:117] "RemoveContainer" containerID="cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.082051 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7\": container with ID starting with cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7 not found: ID does not exist" containerID="cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.082082 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7"} err="failed to get container status \"cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7\": rpc error: code = NotFound desc = could not find container \"cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7\": container with ID starting with cb4ce9b8d36de51de1ad77b2fa40f31d234ce71987af09dccb38b3f1327309b7 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.082099 4801 scope.go:117] "RemoveContainer" containerID="466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.097122 4801 scope.go:117] "RemoveContainer" containerID="a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.099680 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" (UID: "ee5157ae-cc6c-41a3-a372-98ce4cd31e7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.116044 4801 scope.go:117] "RemoveContainer" containerID="738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.124412 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b1cb531-6f5c-4298-b301-dd4a644eaf50" (UID: "2b1cb531-6f5c-4298-b301-dd4a644eaf50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129441 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129472 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xwbr\" (UniqueName: \"kubernetes.io/projected/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-kube-api-access-7xwbr\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129482 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lbrg\" (UniqueName: \"kubernetes.io/projected/12387df0-31c7-4315-b960-ec5ff2e629c6-kube-api-access-6lbrg\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129491 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12387df0-31c7-4315-b960-ec5ff2e629c6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129501 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129511 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxpz2\" (UniqueName: \"kubernetes.io/projected/2b1cb531-6f5c-4298-b301-dd4a644eaf50-kube-api-access-sxpz2\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129521 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129530 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.129539 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b1cb531-6f5c-4298-b301-dd4a644eaf50-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.134626 4801 scope.go:117] "RemoveContainer" containerID="466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.135258 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58\": container with ID starting with 466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58 not found: ID does not exist" containerID="466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.135301 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58"} err="failed to get container status \"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58\": rpc error: code = NotFound desc = could not find container \"466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58\": container with ID starting with 466c952d185b671ee5df13c71772bd069a897cadaa553c5115fdaa74a8427c58 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.135336 4801 scope.go:117] "RemoveContainer" containerID="a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.135669 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41\": container with ID starting with a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41 not found: ID does not exist" containerID="a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.135710 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41"} err="failed to get container status \"a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41\": rpc error: code = NotFound desc = could not find container \"a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41\": container with ID starting with a96b28340b2d6b087c8bf529ad7d6017612249066d8466c5a588c6ddeee57d41 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.135739 4801 scope.go:117] "RemoveContainer" containerID="738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.135996 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307\": container with ID starting with 738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307 not found: ID does not exist" containerID="738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.136022 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307"} err="failed to get container status \"738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307\": rpc error: code = NotFound desc = could not find container \"738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307\": container with ID starting with 738ade7e65f4fedbbab3c721bbc9ce559d1689ea5efd82feb738c0cf8d19f307 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.136037 4801 scope.go:117] "RemoveContainer" containerID="cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.152540 4801 scope.go:117] "RemoveContainer" containerID="02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.170245 4801 scope.go:117] "RemoveContainer" containerID="27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.184880 4801 scope.go:117] "RemoveContainer" containerID="cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.185235 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb\": container with ID starting with cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb not found: ID does not exist" containerID="cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.185291 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb"} err="failed to get container status \"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb\": rpc error: code = NotFound desc = could not find container \"cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb\": container with ID starting with cbc4fad501233c5836e315ccc09a96bbfb32ec265bd809afd70639cdf7229adb not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.185331 4801 scope.go:117] "RemoveContainer" containerID="02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.185638 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7\": container with ID starting with 02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7 not found: ID does not exist" containerID="02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.185675 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7"} err="failed to get container status \"02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7\": rpc error: code = NotFound desc = could not find container \"02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7\": container with ID starting with 02a633ea1695aba3ec1c84eb51d1d701a1f52ff3791ff0f32c61895d1b6554a7 not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.185703 4801 scope.go:117] "RemoveContainer" containerID="27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa"
Nov 24 21:11:24 crc kubenswrapper[4801]: E1124 21:11:24.186133 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa\": container with ID starting with 27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa not found: ID does not exist" containerID="27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.186189 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa"} err="failed to get container status \"27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa\": rpc error: code = NotFound desc = could not find container \"27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa\": container with ID starting with 27fd73e3ef9deef2b297e89fd7c163c86b68fd4155bb526c3cfd4641853dc1aa not found: ID does not exist"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.217040 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d2n4x"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.284658 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.286492 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5gn65"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.293389 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.295792 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kl9g5"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.341889 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hmrp"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.345343 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hmrp"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.349742 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.363656 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fkzh7"]
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.671566 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" path="/var/lib/kubelet/pods/12387df0-31c7-4315-b960-ec5ff2e629c6/volumes"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.672340 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" path="/var/lib/kubelet/pods/20e2e67e-3f3e-404b-b489-8b498ba44334/volumes"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.672949 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" path="/var/lib/kubelet/pods/2b1cb531-6f5c-4298-b301-dd4a644eaf50/volumes"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.674092 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" path="/var/lib/kubelet/pods/6568362d-d2ac-41f4-84c8-46a363c8b042/volumes"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.674830 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" path="/var/lib/kubelet/pods/ee5157ae-cc6c-41a3-a372-98ce4cd31e7b/volumes"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.977571 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" event={"ID":"0cd03aa3-b315-4fad-904e-616d00db6ce6","Type":"ContainerStarted","Data":"fb495c88b44e3742123799e93c9cc4386acd48f57ff696f7582d8ae7e94b0bbb"}
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.977830 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.977903 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" event={"ID":"0cd03aa3-b315-4fad-904e-616d00db6ce6","Type":"ContainerStarted","Data":"eee8051c65b1b116bf8aa5020f78a1a4bd7c37a3abe6972867e8fde20960ab86"}
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.981521 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x"
Nov 24 21:11:24 crc kubenswrapper[4801]: I1124 21:11:24.997909 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d2n4x" podStartSLOduration=1.997884215 podStartE2EDuration="1.997884215s" podCreationTimestamp="2025-11-24 21:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:11:24.995014543 +0000 UTC m=+257.077601213" watchObservedRunningTime="2025-11-24 21:11:24.997884215 +0000 UTC m=+257.080470885"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.513252 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hlcbx"]
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514062 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514079 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514095 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514103 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514114 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514124 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514133 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514143 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514151 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514158 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514170 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514178 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514188 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514195 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514209 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514217 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514227 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514235 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514250 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514260 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514274 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514284 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="extract-utilities"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514299 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514308 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: E1124 21:11:25.514326 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514334 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="extract-content"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514498 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5157ae-cc6c-41a3-a372-98ce4cd31e7b" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514521 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="12387df0-31c7-4315-b960-ec5ff2e629c6" containerName="marketplace-operator"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514536 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e2e67e-3f3e-404b-b489-8b498ba44334" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514548 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6568362d-d2ac-41f4-84c8-46a363c8b042" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.514565 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1cb531-6f5c-4298-b301-dd4a644eaf50" containerName="registry-server"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.515703 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.522396 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.532202 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlcbx"]
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.551359 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-catalog-content\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.551872 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-utilities\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.551957 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8l5z\" (UniqueName: \"kubernetes.io/projected/442941ec-61d1-45b6-9932-da2eb28d6a9e-kube-api-access-r8l5z\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.654274 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-utilities\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.654423 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8l5z\" (UniqueName: \"kubernetes.io/projected/442941ec-61d1-45b6-9932-da2eb28d6a9e-kube-api-access-r8l5z\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.654480 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-catalog-content\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.655215 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-utilities\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.655240 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442941ec-61d1-45b6-9932-da2eb28d6a9e-catalog-content\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.677958 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8l5z\" (UniqueName: \"kubernetes.io/projected/442941ec-61d1-45b6-9932-da2eb28d6a9e-kube-api-access-r8l5z\") pod \"redhat-marketplace-hlcbx\" (UID: \"442941ec-61d1-45b6-9932-da2eb28d6a9e\") " pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.718778 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"]
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.720562 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.724475 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.726380 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"]
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.756033 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.756193 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.756237 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbm74\" (UniqueName: \"kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.839739 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlcbx"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.857519 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.857698 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.857835 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbm74\" (UniqueName: \"kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.858270 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.858387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:25 crc kubenswrapper[4801]: I1124 21:11:25.884551 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbm74\" (UniqueName: \"kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74\") pod \"certified-operators-d4vwq\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:26 crc kubenswrapper[4801]: I1124 21:11:26.052588 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4vwq"
Nov 24 21:11:26 crc kubenswrapper[4801]: I1124 21:11:26.129923 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlcbx"]
Nov 24 21:11:26 crc kubenswrapper[4801]: I1124 21:11:26.473234 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"]
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.027554 4801 generic.go:334] "Generic (PLEG): container finished" podID="442941ec-61d1-45b6-9932-da2eb28d6a9e" containerID="9976ac27c5a27dcbdb02499e40ccc574789449a6462b2461647d44c5d1ec698a" exitCode=0
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.028157 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlcbx" event={"ID":"442941ec-61d1-45b6-9932-da2eb28d6a9e","Type":"ContainerDied","Data":"9976ac27c5a27dcbdb02499e40ccc574789449a6462b2461647d44c5d1ec698a"}
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.028221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlcbx" event={"ID":"442941ec-61d1-45b6-9932-da2eb28d6a9e","Type":"ContainerStarted","Data":"4f016e06b315173670b47f4fa859562849d389876a1859a52806941a79753a46"}
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.036569 4801 generic.go:334] "Generic (PLEG): container finished" podID="4943a025-64de-4062-9c0d-51e219ce174b" containerID="c3d3c595f9c1dc416494e7587daad6f25eccb92d6aa241364b87202c278ecdd9" exitCode=0
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.038452 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerDied","Data":"c3d3c595f9c1dc416494e7587daad6f25eccb92d6aa241364b87202c278ecdd9"}
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.038484 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerStarted","Data":"67e1e2af0fd00f64d528b4928beac641b1b9f8d86a65f414e38eda840207dd03"}
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.912309 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wlmpd"]
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.920224 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmpd"
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.923248 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 21:11:27 crc kubenswrapper[4801]: I1124 21:11:27.934238 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlmpd"]
Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.036990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-catalog-content\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd"
Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.037114 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxf7\" (UniqueName: \"kubernetes.io/projected/52e88ff1-b918-4824-851f-a8b312e78e48-kube-api-access-mmxf7\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd"
Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.037238 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-utilities\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.044253 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerStarted","Data":"2303e49a4c54e8efcff3923dccda7aa39488208facf99e424835fa7d970150d9"} Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.114421 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6lp8r"] Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.116058 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.119606 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.126550 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lp8r"] Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.140385 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-utilities\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.140474 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-catalog-content\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " 
pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.140526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxf7\" (UniqueName: \"kubernetes.io/projected/52e88ff1-b918-4824-851f-a8b312e78e48-kube-api-access-mmxf7\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.141194 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-catalog-content\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.141266 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e88ff1-b918-4824-851f-a8b312e78e48-utilities\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.164527 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxf7\" (UniqueName: \"kubernetes.io/projected/52e88ff1-b918-4824-851f-a8b312e78e48-kube-api-access-mmxf7\") pod \"redhat-operators-wlmpd\" (UID: \"52e88ff1-b918-4824-851f-a8b312e78e48\") " pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.242255 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-catalog-content\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " 
pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.242463 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-utilities\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.242564 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tp8l\" (UniqueName: \"kubernetes.io/projected/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-kube-api-access-2tp8l\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.248801 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.345534 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-catalog-content\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.345601 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-utilities\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.345640 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2tp8l\" (UniqueName: \"kubernetes.io/projected/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-kube-api-access-2tp8l\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.346106 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-utilities\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.346274 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-catalog-content\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.372427 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tp8l\" (UniqueName: \"kubernetes.io/projected/b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9-kube-api-access-2tp8l\") pod \"community-operators-6lp8r\" (UID: \"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9\") " pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.433187 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.623733 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lp8r"] Nov 24 21:11:28 crc kubenswrapper[4801]: W1124 21:11:28.627703 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7aa75b0_00a4_4f5b_89fb_d5ffd0ead9f9.slice/crio-2ea5b9d4c998440fbec98596d1c9538c58154dd63454fa43832dae5ddcd45598 WatchSource:0}: Error finding container 2ea5b9d4c998440fbec98596d1c9538c58154dd63454fa43832dae5ddcd45598: Status 404 returned error can't find the container with id 2ea5b9d4c998440fbec98596d1c9538c58154dd63454fa43832dae5ddcd45598 Nov 24 21:11:28 crc kubenswrapper[4801]: I1124 21:11:28.663234 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlmpd"] Nov 24 21:11:28 crc kubenswrapper[4801]: W1124 21:11:28.667607 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52e88ff1_b918_4824_851f_a8b312e78e48.slice/crio-c8953c4172391612fc0195709aaaa867f182f734b0844975082a0864f136319f WatchSource:0}: Error finding container c8953c4172391612fc0195709aaaa867f182f734b0844975082a0864f136319f: Status 404 returned error can't find the container with id c8953c4172391612fc0195709aaaa867f182f734b0844975082a0864f136319f Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.050920 4801 generic.go:334] "Generic (PLEG): container finished" podID="52e88ff1-b918-4824-851f-a8b312e78e48" containerID="217fc1dacdb20604130c608e1be33fc5b3dff48670a67c507b7d97b343c23eeb" exitCode=0 Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.051046 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmpd" 
event={"ID":"52e88ff1-b918-4824-851f-a8b312e78e48","Type":"ContainerDied","Data":"217fc1dacdb20604130c608e1be33fc5b3dff48670a67c507b7d97b343c23eeb"} Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.051348 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmpd" event={"ID":"52e88ff1-b918-4824-851f-a8b312e78e48","Type":"ContainerStarted","Data":"c8953c4172391612fc0195709aaaa867f182f734b0844975082a0864f136319f"} Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.055798 4801 generic.go:334] "Generic (PLEG): container finished" podID="b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9" containerID="65636a474dd1db78e377468060f43830a2dd2a99718517ae01faa3e01c9f4f41" exitCode=0 Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.055870 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lp8r" event={"ID":"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9","Type":"ContainerDied","Data":"65636a474dd1db78e377468060f43830a2dd2a99718517ae01faa3e01c9f4f41"} Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.055891 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lp8r" event={"ID":"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9","Type":"ContainerStarted","Data":"2ea5b9d4c998440fbec98596d1c9538c58154dd63454fa43832dae5ddcd45598"} Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.060240 4801 generic.go:334] "Generic (PLEG): container finished" podID="442941ec-61d1-45b6-9932-da2eb28d6a9e" containerID="8dd532919210544d4577407b947ac8007079b5e7e8769a004d6a056cdb36a226" exitCode=0 Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.060312 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlcbx" event={"ID":"442941ec-61d1-45b6-9932-da2eb28d6a9e","Type":"ContainerDied","Data":"8dd532919210544d4577407b947ac8007079b5e7e8769a004d6a056cdb36a226"} Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 
21:11:29.063455 4801 generic.go:334] "Generic (PLEG): container finished" podID="4943a025-64de-4062-9c0d-51e219ce174b" containerID="2303e49a4c54e8efcff3923dccda7aa39488208facf99e424835fa7d970150d9" exitCode=0 Nov 24 21:11:29 crc kubenswrapper[4801]: I1124 21:11:29.063500 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerDied","Data":"2303e49a4c54e8efcff3923dccda7aa39488208facf99e424835fa7d970150d9"} Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.094238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerStarted","Data":"ec3f085764451c4644e36e00396aa7d6cdc054d2b7c454098530c2b219eb8198"} Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.100322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlcbx" event={"ID":"442941ec-61d1-45b6-9932-da2eb28d6a9e","Type":"ContainerStarted","Data":"82a799c208196cecdcb008b1901165dac08cb9e9c1450fed0c6e534fb41cd38a"} Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.105504 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmpd" event={"ID":"52e88ff1-b918-4824-851f-a8b312e78e48","Type":"ContainerStarted","Data":"09ca40924f3d3244f9ae84f3893da2ce9d5993a92958887420f902725636813b"} Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.122412 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4vwq" podStartSLOduration=2.642830374 podStartE2EDuration="5.122348015s" podCreationTimestamp="2025-11-24 21:11:25 +0000 UTC" firstStartedPulling="2025-11-24 21:11:27.038891531 +0000 UTC m=+259.121478201" lastFinishedPulling="2025-11-24 21:11:29.518409142 +0000 UTC m=+261.600995842" observedRunningTime="2025-11-24 
21:11:30.117307252 +0000 UTC m=+262.199893922" watchObservedRunningTime="2025-11-24 21:11:30.122348015 +0000 UTC m=+262.204934685" Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.122695 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lp8r" event={"ID":"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9","Type":"ContainerStarted","Data":"d1a667410bcf4159bc6c875ee27218175d13fb6f3379ef307bbaee0351379e2e"} Nov 24 21:11:30 crc kubenswrapper[4801]: I1124 21:11:30.172993 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hlcbx" podStartSLOduration=2.757048121 podStartE2EDuration="5.172546163s" podCreationTimestamp="2025-11-24 21:11:25 +0000 UTC" firstStartedPulling="2025-11-24 21:11:27.032027392 +0000 UTC m=+259.114614102" lastFinishedPulling="2025-11-24 21:11:29.447525434 +0000 UTC m=+261.530112144" observedRunningTime="2025-11-24 21:11:30.169340655 +0000 UTC m=+262.251927325" watchObservedRunningTime="2025-11-24 21:11:30.172546163 +0000 UTC m=+262.255132833" Nov 24 21:11:31 crc kubenswrapper[4801]: I1124 21:11:31.130839 4801 generic.go:334] "Generic (PLEG): container finished" podID="52e88ff1-b918-4824-851f-a8b312e78e48" containerID="09ca40924f3d3244f9ae84f3893da2ce9d5993a92958887420f902725636813b" exitCode=0 Nov 24 21:11:31 crc kubenswrapper[4801]: I1124 21:11:31.131024 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmpd" event={"ID":"52e88ff1-b918-4824-851f-a8b312e78e48","Type":"ContainerDied","Data":"09ca40924f3d3244f9ae84f3893da2ce9d5993a92958887420f902725636813b"} Nov 24 21:11:31 crc kubenswrapper[4801]: I1124 21:11:31.135396 4801 generic.go:334] "Generic (PLEG): container finished" podID="b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9" containerID="d1a667410bcf4159bc6c875ee27218175d13fb6f3379ef307bbaee0351379e2e" exitCode=0 Nov 24 21:11:31 crc kubenswrapper[4801]: I1124 21:11:31.136445 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lp8r" event={"ID":"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9","Type":"ContainerDied","Data":"d1a667410bcf4159bc6c875ee27218175d13fb6f3379ef307bbaee0351379e2e"} Nov 24 21:11:31 crc kubenswrapper[4801]: I1124 21:11:31.136477 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lp8r" event={"ID":"b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9","Type":"ContainerStarted","Data":"e90082461b6b0597b8e8f9a5f2c7a65bc28b6d5c4ab86e6ff65f7750a8d8456a"} Nov 24 21:11:32 crc kubenswrapper[4801]: I1124 21:11:32.146461 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmpd" event={"ID":"52e88ff1-b918-4824-851f-a8b312e78e48","Type":"ContainerStarted","Data":"40db5a29f096c75e72dc3ab4fb1ccf2c160aebb7ec67fa43d37de38c4be2e1d5"} Nov 24 21:11:32 crc kubenswrapper[4801]: I1124 21:11:32.171275 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wlmpd" podStartSLOduration=2.671379527 podStartE2EDuration="5.171239211s" podCreationTimestamp="2025-11-24 21:11:27 +0000 UTC" firstStartedPulling="2025-11-24 21:11:29.054593573 +0000 UTC m=+261.137180243" lastFinishedPulling="2025-11-24 21:11:31.554453257 +0000 UTC m=+263.637039927" observedRunningTime="2025-11-24 21:11:32.166636822 +0000 UTC m=+264.249223492" watchObservedRunningTime="2025-11-24 21:11:32.171239211 +0000 UTC m=+264.253825891" Nov 24 21:11:32 crc kubenswrapper[4801]: I1124 21:11:32.172328 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6lp8r" podStartSLOduration=2.673265795 podStartE2EDuration="4.172321815s" podCreationTimestamp="2025-11-24 21:11:28 +0000 UTC" firstStartedPulling="2025-11-24 21:11:29.058280666 +0000 UTC m=+261.140867336" lastFinishedPulling="2025-11-24 21:11:30.557336686 +0000 UTC m=+262.639923356" 
observedRunningTime="2025-11-24 21:11:31.175943966 +0000 UTC m=+263.258530656" watchObservedRunningTime="2025-11-24 21:11:32.172321815 +0000 UTC m=+264.254908495" Nov 24 21:11:35 crc kubenswrapper[4801]: I1124 21:11:35.840599 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hlcbx" Nov 24 21:11:35 crc kubenswrapper[4801]: I1124 21:11:35.841582 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hlcbx" Nov 24 21:11:35 crc kubenswrapper[4801]: I1124 21:11:35.903807 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hlcbx" Nov 24 21:11:36 crc kubenswrapper[4801]: I1124 21:11:36.053557 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 21:11:36 crc kubenswrapper[4801]: I1124 21:11:36.053632 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 21:11:36 crc kubenswrapper[4801]: I1124 21:11:36.104091 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 21:11:36 crc kubenswrapper[4801]: I1124 21:11:36.214479 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hlcbx" Nov 24 21:11:36 crc kubenswrapper[4801]: I1124 21:11:36.218233 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.249596 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.250040 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.304946 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.433963 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.434099 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:38 crc kubenswrapper[4801]: I1124 21:11:38.492557 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:39 crc kubenswrapper[4801]: I1124 21:11:39.238622 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6lp8r" Nov 24 21:11:39 crc kubenswrapper[4801]: I1124 21:11:39.240054 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wlmpd" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.054982 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr"] Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.056776 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.059900 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.059931 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.064745 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.065402 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.065737 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.072148 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr"] Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.144020 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.144323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.144539 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshj4\" (UniqueName: \"kubernetes.io/projected/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-kube-api-access-hshj4\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.246137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshj4\" (UniqueName: \"kubernetes.io/projected/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-kube-api-access-hshj4\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.246222 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.246268 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc 
kubenswrapper[4801]: I1124 21:11:54.249741 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.260406 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.282137 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshj4\" (UniqueName: \"kubernetes.io/projected/d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb-kube-api-access-hshj4\") pod \"cluster-monitoring-operator-6d5b84845-v8nnr\" (UID: \"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.377075 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" Nov 24 21:11:54 crc kubenswrapper[4801]: I1124 21:11:54.665023 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr"] Nov 24 21:11:55 crc kubenswrapper[4801]: I1124 21:11:55.286908 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" event={"ID":"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb","Type":"ContainerStarted","Data":"f304b4dd4d2a77252b354d697278367413632b957a07e6be7b632e724458d977"} Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.300632 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" event={"ID":"d31dff1b-ac1c-4ce1-8d96-4ee79d7d7dfb","Type":"ContainerStarted","Data":"be9a5fb31bef29451b73935da6436ce4aab1cf2eb6cd65eae7dd9fc35ced54be"} Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.326336 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-v8nnr" podStartSLOduration=1.147219601 podStartE2EDuration="3.326315162s" podCreationTimestamp="2025-11-24 21:11:54 +0000 UTC" firstStartedPulling="2025-11-24 21:11:54.690774398 +0000 UTC m=+286.773361068" lastFinishedPulling="2025-11-24 21:11:56.869869959 +0000 UTC m=+288.952456629" observedRunningTime="2025-11-24 21:11:57.324900499 +0000 UTC m=+289.407487169" watchObservedRunningTime="2025-11-24 21:11:57.326315162 +0000 UTC m=+289.408901832" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.390019 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k5bsn"] Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.391116 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.409930 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k5bsn"] Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.502108 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.502454 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-trusted-ca\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.502762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.502880 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-tls\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.503010 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-bound-sa-token\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.503128 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-certificates\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.504026 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jm7x\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-kube-api-access-4jm7x\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.504191 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.515637 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k"] Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.516335 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.518701 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.518998 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-pwgh5" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.524676 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k"] Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.564073 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606034 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-certificates\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/11d04eba-db44-481f-9fcc-64d1add46ed6-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-55p8k\" (UID: \"11d04eba-db44-481f-9fcc-64d1add46ed6\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jm7x\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-kube-api-access-4jm7x\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-trusted-ca\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606244 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 
21:11:57.606263 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-tls\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.606316 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-bound-sa-token\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.607141 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.607983 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-certificates\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.609082 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-trusted-ca\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 
21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.615193 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-registry-tls\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.615234 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.623316 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-bound-sa-token\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.623569 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jm7x\" (UniqueName: \"kubernetes.io/projected/8e4df16e-186b-4b7b-9b54-c2f972b07b0f-kube-api-access-4jm7x\") pod \"image-registry-66df7c8f76-k5bsn\" (UID: \"8e4df16e-186b-4b7b-9b54-c2f972b07b0f\") " pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.704790 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.707308 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/11d04eba-db44-481f-9fcc-64d1add46ed6-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-55p8k\" (UID: \"11d04eba-db44-481f-9fcc-64d1add46ed6\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.710811 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/11d04eba-db44-481f-9fcc-64d1add46ed6-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-55p8k\" (UID: \"11d04eba-db44-481f-9fcc-64d1add46ed6\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.830548 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:11:57 crc kubenswrapper[4801]: I1124 21:11:57.966671 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k5bsn"] Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.076325 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k"] Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.307804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" event={"ID":"8e4df16e-186b-4b7b-9b54-c2f972b07b0f","Type":"ContainerStarted","Data":"59d6fdc9a7dc191ed3d5e9b2c79818ce042a149d58a29472971222b144fc5f85"} Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.308354 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" event={"ID":"8e4df16e-186b-4b7b-9b54-c2f972b07b0f","Type":"ContainerStarted","Data":"f54067bfc48a5a375cf7ed2bfdba700acda8de9ff37287b517efa592acf0f8f1"} Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.308396 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.310928 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" event={"ID":"11d04eba-db44-481f-9fcc-64d1add46ed6","Type":"ContainerStarted","Data":"c79a09c0d04ddf6f6541497b814385ca241b1b19cc8018fcbb2db7fbbd77c51a"} Nov 24 21:11:58 crc kubenswrapper[4801]: I1124 21:11:58.330580 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" podStartSLOduration=1.33055825 podStartE2EDuration="1.33055825s" podCreationTimestamp="2025-11-24 
21:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:11:58.328769136 +0000 UTC m=+290.411355806" watchObservedRunningTime="2025-11-24 21:11:58.33055825 +0000 UTC m=+290.413144920" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.326464 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" event={"ID":"11d04eba-db44-481f-9fcc-64d1add46ed6","Type":"ContainerStarted","Data":"9a4cfb717123067e3a408695ab2eb8de8aac794f246e22b24cf20b3ebd077965"} Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.328407 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.340099 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.363065 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-55p8k" podStartSLOduration=1.8400033580000001 podStartE2EDuration="3.363042978s" podCreationTimestamp="2025-11-24 21:11:57 +0000 UTC" firstStartedPulling="2025-11-24 21:11:58.096077943 +0000 UTC m=+290.178664613" lastFinishedPulling="2025-11-24 21:11:59.619117553 +0000 UTC m=+291.701704233" observedRunningTime="2025-11-24 21:12:00.358109828 +0000 UTC m=+292.440696538" watchObservedRunningTime="2025-11-24 21:12:00.363042978 +0000 UTC m=+292.445629658" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.612810 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-h644g"] Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.614006 4801 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.616540 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.617281 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kw2qn" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.618900 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.618940 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.635976 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-h644g"] Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.663260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vmw\" (UniqueName: \"kubernetes.io/projected/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-kube-api-access-c7vmw\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.663325 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 
21:12:00.663345 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.663443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-metrics-client-ca\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.764552 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-metrics-client-ca\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.764670 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vmw\" (UniqueName: \"kubernetes.io/projected/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-kube-api-access-c7vmw\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.764718 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-tls\") 
pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.764743 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.766410 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-metrics-client-ca\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.771273 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.775181 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.792569 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vmw\" (UniqueName: \"kubernetes.io/projected/2cdb97f6-c39f-42ad-a7b5-84f9f26957fd-kube-api-access-c7vmw\") pod \"prometheus-operator-db54df47d-h644g\" (UID: \"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:00 crc kubenswrapper[4801]: I1124 21:12:00.931923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" Nov 24 21:12:01 crc kubenswrapper[4801]: I1124 21:12:01.174675 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-h644g"] Nov 24 21:12:01 crc kubenswrapper[4801]: I1124 21:12:01.342969 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" event={"ID":"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd","Type":"ContainerStarted","Data":"b80e7e2ff697fdd54b0c2e233b9959481c1bbbf4cbdb8efa942761e9eb3b4ae8"} Nov 24 21:12:03 crc kubenswrapper[4801]: I1124 21:12:03.358208 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" event={"ID":"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd","Type":"ContainerStarted","Data":"b458b41ecd458cc7d1df7557f51356db35c9ba31b4972827544abddaa8eaf8c8"} Nov 24 21:12:03 crc kubenswrapper[4801]: I1124 21:12:03.358582 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" event={"ID":"2cdb97f6-c39f-42ad-a7b5-84f9f26957fd","Type":"ContainerStarted","Data":"75adc806555afb3665e767a445596fa524bfd6c6d6ee58d545937a70589a0dc5"} Nov 24 21:12:03 crc kubenswrapper[4801]: I1124 21:12:03.379259 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-h644g" podStartSLOduration=1.635379497 
podStartE2EDuration="3.379236259s" podCreationTimestamp="2025-11-24 21:12:00 +0000 UTC" firstStartedPulling="2025-11-24 21:12:01.179651264 +0000 UTC m=+293.262237934" lastFinishedPulling="2025-11-24 21:12:02.923508026 +0000 UTC m=+295.006094696" observedRunningTime="2025-11-24 21:12:03.377133704 +0000 UTC m=+295.459720374" watchObservedRunningTime="2025-11-24 21:12:03.379236259 +0000 UTC m=+295.461822929" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.000840 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rckb5"] Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.002023 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.004024 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.005497 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dxbfk" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.006053 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.020689 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-n4wcq"] Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.021962 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.029181 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.029230 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.029752 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-sb2xm" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.039985 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.040064 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.040101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhh5\" (UniqueName: \"kubernetes.io/projected/c508b3b4-dc05-42e7-8606-83cd1006c941-kube-api-access-szhh5\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.040181 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c508b3b4-dc05-42e7-8606-83cd1006c941-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.065308 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8"] Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.066420 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.070288 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-w4p8x" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.070700 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.070712 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.074431 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.074999 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rckb5"] Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.135849 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8"] Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.143177 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: E1124 21:12:05.143426 4801 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.143499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-textfile\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.143556 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhh5\" (UniqueName: \"kubernetes.io/projected/c508b3b4-dc05-42e7-8606-83cd1006c941-kube-api-access-szhh5\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: E1124 21:12:05.143858 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls podName:c508b3b4-dc05-42e7-8606-83cd1006c941 nodeName:}" failed. No retries permitted until 2025-11-24 21:12:05.643816641 +0000 UTC m=+297.726403311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-rckb5" (UID: "c508b3b4-dc05-42e7-8606-83cd1006c941") : secret "openshift-state-metrics-tls" not found Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.143610 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144011 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-sys\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144041 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1c6a443a-050d-4db2-877a-cbcad6126be4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144069 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-wtmp\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " 
pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144416 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144467 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2w6d\" (UniqueName: \"kubernetes.io/projected/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-api-access-m2w6d\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144513 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27chb\" (UniqueName: \"kubernetes.io/projected/60e95df3-6fa6-4df2-a49b-77237a5d5245-kube-api-access-27chb\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60e95df3-6fa6-4df2-a49b-77237a5d5245-metrics-client-ca\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144834 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.144887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c508b3b4-dc05-42e7-8606-83cd1006c941-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.145391 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-root\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.145495 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.145592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 
21:12:05.145738 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.146003 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.146521 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c508b3b4-dc05-42e7-8606-83cd1006c941-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.166622 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhh5\" (UniqueName: \"kubernetes.io/projected/c508b3b4-dc05-42e7-8606-83cd1006c941-kube-api-access-szhh5\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.178863 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247517 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-sys\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247573 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1c6a443a-050d-4db2-877a-cbcad6126be4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247601 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-wtmp\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247620 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247641 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m2w6d\" (UniqueName: \"kubernetes.io/projected/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-api-access-m2w6d\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27chb\" (UniqueName: \"kubernetes.io/projected/60e95df3-6fa6-4df2-a49b-77237a5d5245-kube-api-access-27chb\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247685 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60e95df3-6fa6-4df2-a49b-77237a5d5245-metrics-client-ca\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247702 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247725 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-root\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247740 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247766 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247794 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-textfile\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.247857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: 
\"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.249153 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60e95df3-6fa6-4df2-a49b-77237a5d5245-metrics-client-ca\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.249213 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-sys\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.249527 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1c6a443a-050d-4db2-877a-cbcad6126be4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.249758 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-wtmp\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: E1124 21:12:05.250135 4801 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Nov 24 21:12:05 crc kubenswrapper[4801]: E1124 21:12:05.250187 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls podName:60e95df3-6fa6-4df2-a49b-77237a5d5245 nodeName:}" failed. No retries permitted until 2025-11-24 21:12:05.750172358 +0000 UTC m=+297.832759028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls") pod "node-exporter-n4wcq" (UID: "60e95df3-6fa6-4df2-a49b-77237a5d5245") : secret "node-exporter-tls" not found Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.251091 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-textfile\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.251178 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/60e95df3-6fa6-4df2-a49b-77237a5d5245-root\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.251333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.251469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1c6a443a-050d-4db2-877a-cbcad6126be4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.253277 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.253968 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.269408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27chb\" (UniqueName: \"kubernetes.io/projected/60e95df3-6fa6-4df2-a49b-77237a5d5245-kube-api-access-27chb\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.275262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.278535 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2w6d\" (UniqueName: \"kubernetes.io/projected/1c6a443a-050d-4db2-877a-cbcad6126be4-kube-api-access-m2w6d\") pod \"kube-state-metrics-777cb5bd5d-4ngt8\" (UID: \"1c6a443a-050d-4db2-877a-cbcad6126be4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.390615 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.614876 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8"] Nov 24 21:12:05 crc kubenswrapper[4801]: W1124 21:12:05.625351 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c6a443a_050d_4db2_877a_cbcad6126be4.slice/crio-a4841d2f2248232ac976626fbf562d0d367d163f75d72c5777cddea448d786ab WatchSource:0}: Error finding container a4841d2f2248232ac976626fbf562d0d367d163f75d72c5777cddea448d786ab: Status 404 returned error can't find the container with id a4841d2f2248232ac976626fbf562d0d367d163f75d72c5777cddea448d786ab Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.655714 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.662962 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c508b3b4-dc05-42e7-8606-83cd1006c941-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-566fddb674-rckb5\" (UID: \"c508b3b4-dc05-42e7-8606-83cd1006c941\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.758821 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.763795 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/60e95df3-6fa6-4df2-a49b-77237a5d5245-node-exporter-tls\") pod \"node-exporter-n4wcq\" (UID: \"60e95df3-6fa6-4df2-a49b-77237a5d5245\") " pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.919825 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" Nov 24 21:12:05 crc kubenswrapper[4801]: I1124 21:12:05.981881 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-n4wcq" Nov 24 21:12:06 crc kubenswrapper[4801]: W1124 21:12:06.013926 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e95df3_6fa6_4df2_a49b_77237a5d5245.slice/crio-7f4406b252a6417eb49b3b91227fbad7863bf450d1b5cc32e5a149cadc86c14e WatchSource:0}: Error finding container 7f4406b252a6417eb49b3b91227fbad7863bf450d1b5cc32e5a149cadc86c14e: Status 404 returned error can't find the container with id 7f4406b252a6417eb49b3b91227fbad7863bf450d1b5cc32e5a149cadc86c14e Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.087136 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.089132 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.093725 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-tdjql" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096211 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096255 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096391 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096455 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096500 4801 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.096605 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.099958 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.102411 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.172016 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173395 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-web-config\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173433 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173461 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173507 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173530 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sdm\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-kube-api-access-95sdm\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173693 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173788 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173815 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173852 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173935 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.173983 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-config-out\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276086 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276190 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276254 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276290 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-config-out\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-web-config\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276761 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276884 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.276952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.277011 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.277059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sdm\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-kube-api-access-95sdm\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.277097 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.277132 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.277623 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.278295 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.278589 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fecb53b-4166-45d8-b0bd-542892875f3f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.290770 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.292168 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.292429 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fecb53b-4166-45d8-b0bd-542892875f3f-config-out\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.292585 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.292760 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.300131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.300227 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.300394 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fecb53b-4166-45d8-b0bd-542892875f3f-web-config\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.303407 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sdm\" (UniqueName: \"kubernetes.io/projected/3fecb53b-4166-45d8-b0bd-542892875f3f-kube-api-access-95sdm\") pod \"alertmanager-main-0\" (UID: \"3fecb53b-4166-45d8-b0bd-542892875f3f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.381646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" event={"ID":"1c6a443a-050d-4db2-877a-cbcad6126be4","Type":"ContainerStarted","Data":"a4841d2f2248232ac976626fbf562d0d367d163f75d72c5777cddea448d786ab"} Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.383353 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-n4wcq" 
event={"ID":"60e95df3-6fa6-4df2-a49b-77237a5d5245","Type":"ContainerStarted","Data":"7f4406b252a6417eb49b3b91227fbad7863bf450d1b5cc32e5a149cadc86c14e"} Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.436174 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.469085 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rckb5"] Nov 24 21:12:06 crc kubenswrapper[4801]: I1124 21:12:06.702117 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.071655 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-d9f46f46-9962c"] Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.073223 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.076281 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.076680 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-hpkfg" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.076694 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.076795 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.077002 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.077251 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.077265 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-3rnnvlbon0tq6" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.096032 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-d9f46f46-9962c"] Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.197909 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198018 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198066 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " 
pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198143 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rmd\" (UniqueName: \"kubernetes.io/projected/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-kube-api-access-d9rmd\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198246 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198301 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-metrics-client-ca\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.198396 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-grpc-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: W1124 21:12:07.226487 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fecb53b_4166_45d8_b0bd_542892875f3f.slice/crio-dd96430ebe98cc40afa5d2c6d13547fca846431137517ca4c6ac71fccd3ed22d WatchSource:0}: Error finding container dd96430ebe98cc40afa5d2c6d13547fca846431137517ca4c6ac71fccd3ed22d: Status 404 returned error can't find the container with id dd96430ebe98cc40afa5d2c6d13547fca846431137517ca4c6ac71fccd3ed22d Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299641 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299733 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-metrics-client-ca\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299762 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-grpc-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " 
pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299826 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299928 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299956 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.299992 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-d9rmd\" (UniqueName: \"kubernetes.io/projected/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-kube-api-access-d9rmd\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.301694 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-metrics-client-ca\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.310428 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.311235 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.312127 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " 
pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.314613 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.315118 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.315538 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-secret-grpc-tls\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.318530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9rmd\" (UniqueName: \"kubernetes.io/projected/dd8b5b82-5d51-4f78-92c1-cc88e49670f5-kube-api-access-d9rmd\") pod \"thanos-querier-d9f46f46-9962c\" (UID: \"dd8b5b82-5d51-4f78-92c1-cc88e49670f5\") " pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.390078 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" 
event={"ID":"c508b3b4-dc05-42e7-8606-83cd1006c941","Type":"ContainerStarted","Data":"e1f6deb41ce01741d278c0ed6bc43faa42a6000713abaaf7d940dea76aade1b2"} Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.390738 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" event={"ID":"c508b3b4-dc05-42e7-8606-83cd1006c941","Type":"ContainerStarted","Data":"5fbedc1d6c1480cb10763a12ba6475e688eb60722cfeec5c0c52d70429ab66d3"} Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.390767 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" event={"ID":"c508b3b4-dc05-42e7-8606-83cd1006c941","Type":"ContainerStarted","Data":"91ae051711c42dde785c5c9310903d11bf4421fd8d694dded2840050f10e5c5a"} Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.391150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"dd96430ebe98cc40afa5d2c6d13547fca846431137517ca4c6ac71fccd3ed22d"} Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.393834 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:07 crc kubenswrapper[4801]: I1124 21:12:07.872283 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-d9f46f46-9962c"] Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.401923 4801 generic.go:334] "Generic (PLEG): container finished" podID="60e95df3-6fa6-4df2-a49b-77237a5d5245" containerID="2e19a1888a977b65864e56256e60060a89cac0aac1063480d56de9bc8aff1d85" exitCode=0 Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.402109 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-n4wcq" event={"ID":"60e95df3-6fa6-4df2-a49b-77237a5d5245","Type":"ContainerDied","Data":"2e19a1888a977b65864e56256e60060a89cac0aac1063480d56de9bc8aff1d85"} Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.406906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"d07d7b5c382a3422a995c0e2d2c5008f4a51cdc391e34caccbaa18da048ffcd4"} Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.409119 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" event={"ID":"1c6a443a-050d-4db2-877a-cbcad6126be4","Type":"ContainerStarted","Data":"2ece9429e356bce070cac4037715ff9395e3a9c411e56ddbc5b6b840dcd00020"} Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.409150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" event={"ID":"1c6a443a-050d-4db2-877a-cbcad6126be4","Type":"ContainerStarted","Data":"b38534b667a746d3a9c9f65503883538049571ace21c289066fe94d2d3d58457"} Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.409170 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" event={"ID":"1c6a443a-050d-4db2-877a-cbcad6126be4","Type":"ContainerStarted","Data":"be1580d979038ddabe0b90d8bfa5bb47c3ecd60c4459bbbaec36728b0e5f1a95"} Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.420308 4801 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 24 21:12:08 crc kubenswrapper[4801]: I1124 21:12:08.450962 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4ngt8" podStartSLOduration=1.49829649 podStartE2EDuration="3.450932617s" podCreationTimestamp="2025-11-24 21:12:05 +0000 UTC" firstStartedPulling="2025-11-24 21:12:05.628117412 +0000 UTC m=+297.710704082" lastFinishedPulling="2025-11-24 21:12:07.580753499 +0000 UTC m=+299.663340209" observedRunningTime="2025-11-24 21:12:08.436934401 +0000 UTC m=+300.519521071" watchObservedRunningTime="2025-11-24 21:12:08.450932617 +0000 UTC m=+300.533519287" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.419431 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-n4wcq" event={"ID":"60e95df3-6fa6-4df2-a49b-77237a5d5245","Type":"ContainerStarted","Data":"4a7f043f798766389156cdcc313c5b68b84bd95196321d07a5937e833275578f"} Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.419901 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-n4wcq" event={"ID":"60e95df3-6fa6-4df2-a49b-77237a5d5245","Type":"ContainerStarted","Data":"dac4bf2ae2615b9a910e1ea0493f29af44291f0bf3509c94242d34d50c8643be"} Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.422257 4801 generic.go:334] "Generic (PLEG): container finished" podID="3fecb53b-4166-45d8-b0bd-542892875f3f" containerID="9fdd8c35bf3a3093e9ba54448d711d2d6fac27a45a47a0b6e4dfaa675d148936" exitCode=0 Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.422425 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerDied","Data":"9fdd8c35bf3a3093e9ba54448d711d2d6fac27a45a47a0b6e4dfaa675d148936"} Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.428993 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" event={"ID":"c508b3b4-dc05-42e7-8606-83cd1006c941","Type":"ContainerStarted","Data":"556c8786465774618f57a6ee747e3e6fad3296fb52bf35f129f71d1cafa75a62"} Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.443543 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-n4wcq" podStartSLOduration=3.87376276 podStartE2EDuration="5.443482281s" podCreationTimestamp="2025-11-24 21:12:04 +0000 UTC" firstStartedPulling="2025-11-24 21:12:06.019356482 +0000 UTC m=+298.101943152" lastFinishedPulling="2025-11-24 21:12:07.589076003 +0000 UTC m=+299.671662673" observedRunningTime="2025-11-24 21:12:09.43723501 +0000 UTC m=+301.519821680" watchObservedRunningTime="2025-11-24 21:12:09.443482281 +0000 UTC m=+301.526068971" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.459689 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rckb5" podStartSLOduration=3.175977719 podStartE2EDuration="5.459661583s" podCreationTimestamp="2025-11-24 21:12:04 +0000 UTC" firstStartedPulling="2025-11-24 21:12:06.735579023 +0000 UTC m=+298.818165693" lastFinishedPulling="2025-11-24 21:12:09.019262887 +0000 UTC m=+301.101849557" observedRunningTime="2025-11-24 21:12:09.455925519 +0000 UTC m=+301.538512189" watchObservedRunningTime="2025-11-24 21:12:09.459661583 +0000 UTC m=+301.542248253" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.783193 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.784680 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.797074 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.841749 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.841806 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.841847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.842022 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzwn\" (UniqueName: \"kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn\") pod \"console-7b9c67dcbb-7ds4w\" (UID: 
\"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.842071 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.842154 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.842213 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944103 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944173 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config\") pod 
\"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944640 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzwn\" (UniqueName: \"kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944746 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.944876 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca\") pod 
\"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.946039 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.946070 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.946938 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.947508 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.952660 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " 
pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.964218 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzwn\" (UniqueName: \"kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:09 crc kubenswrapper[4801]: I1124 21:12:09.965225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config\") pod \"console-7b9c67dcbb-7ds4w\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.131143 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.305487 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7dd4cc789c-wm4gj"] Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.306771 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.309301 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jshwt" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.310476 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.311204 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.311384 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7ufrqo84iofmr" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.311537 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.314465 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.316495 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7dd4cc789c-wm4gj"] Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354424 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-client-certs\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" 
(UniqueName: \"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-metrics-server-audit-profiles\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354619 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-client-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw97k\" (UniqueName: \"kubernetes.io/projected/5493d1e1-679d-488b-89bf-555c3c5461c0-kube-api-access-kw97k\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354709 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-server-tls\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354811 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " 
pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.354871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5493d1e1-679d-488b-89bf-555c3c5461c0-audit-log\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457062 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457190 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5493d1e1-679d-488b-89bf-555c3c5461c0-audit-log\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-client-certs\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457532 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-metrics-server-audit-profiles\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457584 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-client-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457766 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw97k\" (UniqueName: \"kubernetes.io/projected/5493d1e1-679d-488b-89bf-555c3c5461c0-kube-api-access-kw97k\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.457816 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-server-tls\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.458605 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.458768 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5493d1e1-679d-488b-89bf-555c3c5461c0-audit-log\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.460742 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5493d1e1-679d-488b-89bf-555c3c5461c0-metrics-server-audit-profiles\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.467882 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-server-tls\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.471525 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-secret-metrics-client-certs\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.472042 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5493d1e1-679d-488b-89bf-555c3c5461c0-client-ca-bundle\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 
21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.478207 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw97k\" (UniqueName: \"kubernetes.io/projected/5493d1e1-679d-488b-89bf-555c3c5461c0-kube-api-access-kw97k\") pod \"metrics-server-7dd4cc789c-wm4gj\" (UID: \"5493d1e1-679d-488b-89bf-555c3c5461c0\") " pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.637068 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.788164 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-fc679d55-s5b4b"] Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.789718 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.793670 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-fc679d55-s5b4b"] Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.795425 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.795882 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.866470 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a91716ae-0c98-4f30-bdeb-39188edd0f52-monitoring-plugin-cert\") pod \"monitoring-plugin-fc679d55-s5b4b\" (UID: \"a91716ae-0c98-4f30-bdeb-39188edd0f52\") " pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:10 crc kubenswrapper[4801]: 
I1124 21:12:10.968732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a91716ae-0c98-4f30-bdeb-39188edd0f52-monitoring-plugin-cert\") pod \"monitoring-plugin-fc679d55-s5b4b\" (UID: \"a91716ae-0c98-4f30-bdeb-39188edd0f52\") " pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:10 crc kubenswrapper[4801]: I1124 21:12:10.977450 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a91716ae-0c98-4f30-bdeb-39188edd0f52-monitoring-plugin-cert\") pod \"monitoring-plugin-fc679d55-s5b4b\" (UID: \"a91716ae-0c98-4f30-bdeb-39188edd0f52\") " pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.078422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:12:11 crc kubenswrapper[4801]: W1124 21:12:11.103648 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e99486_cf05_432c_b631_d069844c34bd.slice/crio-26b9459622ad3d17b2a859b13337e5433502de8b413f5b94e26a4a99bbc5870f WatchSource:0}: Error finding container 26b9459622ad3d17b2a859b13337e5433502de8b413f5b94e26a4a99bbc5870f: Status 404 returned error can't find the container with id 26b9459622ad3d17b2a859b13337e5433502de8b413f5b94e26a4a99bbc5870f Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.111263 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.197485 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7dd4cc789c-wm4gj"] Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.373032 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-fc679d55-s5b4b"] Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.411764 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.414097 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.420992 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.421093 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.421032 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a507v4776g074" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.421979 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.422243 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.422406 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-sgh6m" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.422548 4801 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.424000 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.424215 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.424551 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.424565 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.424658 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.428301 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.444187 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" event={"ID":"a91716ae-0c98-4f30-bdeb-39188edd0f52","Type":"ContainerStarted","Data":"f74c3297f8fb37379778316e70e95ebce471865bf5fb534b939f6621b90d40d5"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.447863 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.449511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" 
event={"ID":"5493d1e1-679d-488b-89bf-555c3c5461c0","Type":"ContainerStarted","Data":"16bd32d1083ad628f8e600d164de7cce07f6bf172da1bd2fd875c0182927a9d3"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.454792 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9c67dcbb-7ds4w" event={"ID":"84e99486-cf05-432c-b631-d069844c34bd","Type":"ContainerStarted","Data":"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.454866 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9c67dcbb-7ds4w" event={"ID":"84e99486-cf05-432c-b631-d069844c34bd","Type":"ContainerStarted","Data":"26b9459622ad3d17b2a859b13337e5433502de8b413f5b94e26a4a99bbc5870f"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.458574 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"d19c500efc6fef395993ba08c6b2e778e95840a48f648cffa1a5db66349aa603"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.458623 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"434e81451840404f258fc3d59345ef624163369dbdd72a6abcfb722965158be6"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.458634 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"6c1c279074af20e8cb79b26a7dee79ef4b24a1739b95e339485c4b49696f39bc"} Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477339 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477429 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477448 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-config-out\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477483 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477596 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477777 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477815 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc 
kubenswrapper[4801]: I1124 21:12:11.477831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xkr\" (UniqueName: \"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-kube-api-access-q8xkr\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477852 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477896 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-web-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477957 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 
21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477975 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.477992 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.478012 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.496636 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b9c67dcbb-7ds4w" podStartSLOduration=2.496612565 podStartE2EDuration="2.496612565s" podCreationTimestamp="2025-11-24 21:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:12:11.496004437 +0000 UTC m=+303.578591097" watchObservedRunningTime="2025-11-24 21:12:11.496612565 +0000 UTC m=+303.579199235" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.579960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" 
(UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580036 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8xkr\" (UniqueName: \"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-kube-api-access-q8xkr\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580109 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-web-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580150 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580193 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580308 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580330 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580385 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580403 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580453 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580470 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-config-out\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580501 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580519 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.580588 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.581567 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.583916 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.592561 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.593343 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.593860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.593863 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.594093 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/258aed89-5c9d-43e0-9f06-73795b8f8407-config-out\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.597416 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.599166 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.599397 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.599726 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.600258 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.600591 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.600715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xkr\" (UniqueName: \"kubernetes.io/projected/258aed89-5c9d-43e0-9f06-73795b8f8407-kube-api-access-q8xkr\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.603614 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-web-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.604893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/258aed89-5c9d-43e0-9f06-73795b8f8407-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.607493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-config\") pod \"prometheus-k8s-0\" (UID: \"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.615339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/258aed89-5c9d-43e0-9f06-73795b8f8407-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"258aed89-5c9d-43e0-9f06-73795b8f8407\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:11 crc kubenswrapper[4801]: I1124 21:12:11.737775 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:14 crc kubenswrapper[4801]: I1124 21:12:14.789099 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:12:15 crc kubenswrapper[4801]: W1124 21:12:15.039118 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod258aed89_5c9d_43e0_9f06_73795b8f8407.slice/crio-c60a6b0940b775fd797c4b50dcc02d967b31d00f35cca4d6afe53f0c0571f7e9 WatchSource:0}: Error finding container c60a6b0940b775fd797c4b50dcc02d967b31d00f35cca4d6afe53f0c0571f7e9: Status 404 returned error can't find the container with id c60a6b0940b775fd797c4b50dcc02d967b31d00f35cca4d6afe53f0c0571f7e9 Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.493519 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"043d4e7a0dfeecc33193b0995902d6b3c4d4c25cae9e3754a74c8c9808963418"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.493973 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"18c81f8e0c4ae07e9b710103afe1fa9761cfb61ccff7281d848d296cb7d94a8c"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.495423 4801 generic.go:334] "Generic (PLEG): container finished" podID="258aed89-5c9d-43e0-9f06-73795b8f8407" containerID="80b3181957768df09496dd3346d939b965bce5c316db72af296da0f7a8fc6a7e" exitCode=0 Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.495488 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerDied","Data":"80b3181957768df09496dd3346d939b965bce5c316db72af296da0f7a8fc6a7e"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.495507 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"c60a6b0940b775fd797c4b50dcc02d967b31d00f35cca4d6afe53f0c0571f7e9"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.498028 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"a09de0eb4aa02a1ec1f77081b4ccc517e020e7d09e5b91c6db686045229006d8"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.498068 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"15d973652735e3292342ac17c30c7f9206861b37a955af9922bb2847a67b95d2"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.500000 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" event={"ID":"a91716ae-0c98-4f30-bdeb-39188edd0f52","Type":"ContainerStarted","Data":"1bbc5f53dd44ce2dbcdf811b93af9e3b095b5f8f96c71d2e2e6c37fed48adf7e"} Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.501042 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.506258 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" event={"ID":"5493d1e1-679d-488b-89bf-555c3c5461c0","Type":"ContainerStarted","Data":"e08ea63b565892042b6180a4bfc7ba4d714197cbc66d3f6a2781b31f59dd8c6e"} 
Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.509168 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" Nov 24 21:12:15 crc kubenswrapper[4801]: I1124 21:12:15.564439 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" podStartSLOduration=1.737993963 podStartE2EDuration="5.564416228s" podCreationTimestamp="2025-11-24 21:12:10 +0000 UTC" firstStartedPulling="2025-11-24 21:12:11.209127235 +0000 UTC m=+303.291713905" lastFinishedPulling="2025-11-24 21:12:15.03554949 +0000 UTC m=+307.118136170" observedRunningTime="2025-11-24 21:12:15.559568071 +0000 UTC m=+307.642154741" watchObservedRunningTime="2025-11-24 21:12:15.564416228 +0000 UTC m=+307.647002898" Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.531325 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"a354e10a3d957a2d3e25c14e9e207077502b11a458055efe9191f7bb25be9521"} Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.531670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"32bac9e873e079bb0171acd5547a3f4a6e8f3cd7e2e12a455492600c03f1c00a"} Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.531686 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"7f8a0e8c218d79b047e3e2150461d4d50297f34684f65859108293a2e3cdcd56"} Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.531695 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3fecb53b-4166-45d8-b0bd-542892875f3f","Type":"ContainerStarted","Data":"075d889dfacf227f9faaaf9695444ebc29960f2ee1b5835704ed3d2dc1e67085"} Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.537967 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" event={"ID":"dd8b5b82-5d51-4f78-92c1-cc88e49670f5","Type":"ContainerStarted","Data":"c2e3d4144d7ebb8e2816dfd750537dbdec4279805f803fb501b7deb204bf384a"} Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.594530 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-fc679d55-s5b4b" podStartSLOduration=2.8775796700000003 podStartE2EDuration="6.594502263s" podCreationTimestamp="2025-11-24 21:12:10 +0000 UTC" firstStartedPulling="2025-11-24 21:12:11.389740522 +0000 UTC m=+303.472327192" lastFinishedPulling="2025-11-24 21:12:15.106663105 +0000 UTC m=+307.189249785" observedRunningTime="2025-11-24 21:12:15.582567891 +0000 UTC m=+307.665154571" watchObservedRunningTime="2025-11-24 21:12:16.594502263 +0000 UTC m=+308.677088973" Nov 24 21:12:16 crc kubenswrapper[4801]: I1124 21:12:16.594902 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.797159125 podStartE2EDuration="10.594892705s" podCreationTimestamp="2025-11-24 21:12:06 +0000 UTC" firstStartedPulling="2025-11-24 21:12:07.230216679 +0000 UTC m=+299.312803349" lastFinishedPulling="2025-11-24 21:12:15.027950259 +0000 UTC m=+307.110536929" observedRunningTime="2025-11-24 21:12:16.587845401 +0000 UTC m=+308.670432071" watchObservedRunningTime="2025-11-24 21:12:16.594892705 +0000 UTC m=+308.677479405" Nov 24 21:12:17 crc kubenswrapper[4801]: I1124 21:12:17.395119 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:17 crc kubenswrapper[4801]: I1124 21:12:17.407713 
4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" Nov 24 21:12:17 crc kubenswrapper[4801]: I1124 21:12:17.449481 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-d9f46f46-9962c" podStartSLOduration=3.311658566 podStartE2EDuration="10.449454297s" podCreationTimestamp="2025-11-24 21:12:07 +0000 UTC" firstStartedPulling="2025-11-24 21:12:07.890046714 +0000 UTC m=+299.972633384" lastFinishedPulling="2025-11-24 21:12:15.027842435 +0000 UTC m=+307.110429115" observedRunningTime="2025-11-24 21:12:16.64073363 +0000 UTC m=+308.723320310" watchObservedRunningTime="2025-11-24 21:12:17.449454297 +0000 UTC m=+309.532040967" Nov 24 21:12:17 crc kubenswrapper[4801]: I1124 21:12:17.749695 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k5bsn" Nov 24 21:12:17 crc kubenswrapper[4801]: I1124 21:12:17.815656 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.131400 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.132054 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.151516 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579767 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"442d7ee0e6e6d1e0242d675dfebf21e7a521e90f5be7cd8f17ac0c979a808a9e"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579815 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"e644b9ab57d1e529c3d57c3d1b4d406e5f598f9bfd30eab1a25d92f199529514"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579830 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"6a290be18473812546cefec7ccaa3ed1cefbb83faa7c559da4974c4ab5ad3e17"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579841 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"05113ed9726b51afc3011c07a7c5a479b9a8e02d9f52c0290df9b9127a6de1b3"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"dbac3cdb677e1d2654d8c447fac32a49cbb34def19cba701860a5f0fbecda7af"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.579858 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"258aed89-5c9d-43e0-9f06-73795b8f8407","Type":"ContainerStarted","Data":"6a1e35852a4f01b77aca631c7f07cd8cec0778d75c2a10dcaea9438e4ef534a7"} Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.584717 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.612663 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.525984426 podStartE2EDuration="9.612638822s" podCreationTimestamp="2025-11-24 21:12:11 +0000 UTC" firstStartedPulling="2025-11-24 21:12:15.497025476 +0000 UTC m=+307.579612166" lastFinishedPulling="2025-11-24 21:12:19.583679892 +0000 UTC m=+311.666266562" observedRunningTime="2025-11-24 21:12:20.609150015 +0000 UTC m=+312.691736695" watchObservedRunningTime="2025-11-24 21:12:20.612638822 +0000 UTC m=+312.695225482" Nov 24 21:12:20 crc kubenswrapper[4801]: I1124 21:12:20.680995 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:12:21 crc kubenswrapper[4801]: I1124 21:12:21.738840 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:12:30 crc kubenswrapper[4801]: I1124 21:12:30.637439 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:30 crc kubenswrapper[4801]: I1124 21:12:30.638237 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:42 crc kubenswrapper[4801]: I1124 21:12:42.873669 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerName="registry" containerID="cri-o://487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55" gracePeriod=30 Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.010176 4801 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-62t9t container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.33:5000/healthz\": dial tcp 10.217.0.33:5000: connect: connection refused" 
start-of-body= Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.010255 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.33:5000/healthz\": dial tcp 10.217.0.33:5000: connect: connection refused" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.267642 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.350354 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cszwp\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.350900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.350979 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.351056 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca\") pod 
\"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.351111 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.351191 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.351248 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.351331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets\") pod \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\" (UID: \"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78\") " Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.352530 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.352570 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.359287 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.359540 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp" (OuterVolumeSpecName: "kube-api-access-cszwp") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "kube-api-access-cszwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.360032 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.360911 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.363935 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.377489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" (UID: "7028d5cc-2df1-41a1-b610-a6b8e9b7bf78"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454037 4801 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454088 4801 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454107 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cszwp\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-kube-api-access-cszwp\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454127 4801 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454143 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454156 4801 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.454169 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:43 crc 
kubenswrapper[4801]: I1124 21:12:43.756868 4801 generic.go:334] "Generic (PLEG): container finished" podID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerID="487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55" exitCode=0 Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.756931 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" event={"ID":"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78","Type":"ContainerDied","Data":"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55"} Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.756975 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" event={"ID":"7028d5cc-2df1-41a1-b610-a6b8e9b7bf78","Type":"ContainerDied","Data":"aff25d1499b0ea88fe1b7d730f0417805dc567de7c18caf41f23c09b502428ad"} Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.757001 4801 scope.go:117] "RemoveContainer" containerID="487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.757696 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-62t9t" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.789516 4801 scope.go:117] "RemoveContainer" containerID="487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55" Nov 24 21:12:43 crc kubenswrapper[4801]: E1124 21:12:43.791316 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55\": container with ID starting with 487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55 not found: ID does not exist" containerID="487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.791420 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55"} err="failed to get container status \"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55\": rpc error: code = NotFound desc = could not find container \"487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55\": container with ID starting with 487f8c4d7355c59d3133f48d84da4e13435affe04f200f7b27531fbd0cbf3b55 not found: ID does not exist" Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.808275 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:12:43 crc kubenswrapper[4801]: I1124 21:12:43.814156 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-62t9t"] Nov 24 21:12:44 crc kubenswrapper[4801]: I1124 21:12:44.678653 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" path="/var/lib/kubelet/pods/7028d5cc-2df1-41a1-b610-a6b8e9b7bf78/volumes" Nov 24 21:12:45 crc kubenswrapper[4801]: I1124 
21:12:45.736499 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-khn9r" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" containerID="cri-o://86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8" gracePeriod=15 Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.174904 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-khn9r_6994c699-1333-48ce-a5cc-62ce628e3497/console/0.log" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.175022 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318119 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318199 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm6mw\" (UniqueName: \"kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318247 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318295 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318414 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318472 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.318551 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config\") pod \"6994c699-1333-48ce-a5cc-62ce628e3497\" (UID: \"6994c699-1333-48ce-a5cc-62ce628e3497\") " Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.320167 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config" (OuterVolumeSpecName: "console-config") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.321589 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca" (OuterVolumeSpecName: "service-ca") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.321736 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.322714 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.327166 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw" (OuterVolumeSpecName: "kube-api-access-lm6mw") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "kube-api-access-lm6mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.327435 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.330546 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6994c699-1333-48ce-a5cc-62ce628e3497" (UID: "6994c699-1333-48ce-a5cc-62ce628e3497"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420484 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420883 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm6mw\" (UniqueName: \"kubernetes.io/projected/6994c699-1333-48ce-a5cc-62ce628e3497-kube-api-access-lm6mw\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420897 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420907 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420917 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6994c699-1333-48ce-a5cc-62ce628e3497-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420927 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.420935 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6994c699-1333-48ce-a5cc-62ce628e3497-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790236 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-khn9r_6994c699-1333-48ce-a5cc-62ce628e3497/console/0.log" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790309 4801 generic.go:334] "Generic (PLEG): container finished" podID="6994c699-1333-48ce-a5cc-62ce628e3497" containerID="86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8" exitCode=2 Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790352 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khn9r" event={"ID":"6994c699-1333-48ce-a5cc-62ce628e3497","Type":"ContainerDied","Data":"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8"} Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khn9r" 
event={"ID":"6994c699-1333-48ce-a5cc-62ce628e3497","Type":"ContainerDied","Data":"1398945dcc809207baf8aec6721665e1733d3d4518ce1a06b0dc14f5fc014aa5"} Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790519 4801 scope.go:117] "RemoveContainer" containerID="86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.790555 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-khn9r" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.816042 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.820028 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-khn9r"] Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.824169 4801 scope.go:117] "RemoveContainer" containerID="86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8" Nov 24 21:12:46 crc kubenswrapper[4801]: E1124 21:12:46.824869 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8\": container with ID starting with 86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8 not found: ID does not exist" containerID="86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8" Nov 24 21:12:46 crc kubenswrapper[4801]: I1124 21:12:46.824903 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8"} err="failed to get container status \"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8\": rpc error: code = NotFound desc = could not find container \"86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8\": container with ID starting with 
86398130f2d96929dd8b244766f6b40ad13a3fc01e31b1e723ccd9cc58eda9a8 not found: ID does not exist" Nov 24 21:12:48 crc kubenswrapper[4801]: I1124 21:12:48.674113 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" path="/var/lib/kubelet/pods/6994c699-1333-48ce-a5cc-62ce628e3497/volumes" Nov 24 21:12:50 crc kubenswrapper[4801]: I1124 21:12:50.647813 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:50 crc kubenswrapper[4801]: I1124 21:12:50.655519 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7dd4cc789c-wm4gj" Nov 24 21:12:54 crc kubenswrapper[4801]: I1124 21:12:54.320667 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:12:54 crc kubenswrapper[4801]: I1124 21:12:54.321445 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:13:11 crc kubenswrapper[4801]: I1124 21:13:11.739143 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:13:11 crc kubenswrapper[4801]: I1124 21:13:11.783618 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:13:12 crc kubenswrapper[4801]: I1124 21:13:12.596073 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.843978 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:13:22 crc kubenswrapper[4801]: E1124 21:13:22.844841 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.844856 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" Nov 24 21:13:22 crc kubenswrapper[4801]: E1124 21:13:22.844883 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerName="registry" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.844889 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerName="registry" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.844993 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="7028d5cc-2df1-41a1-b610-a6b8e9b7bf78" containerName="registry" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.845002 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6994c699-1333-48ce-a5cc-62ce628e3497" containerName="console" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.845489 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.868878 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911502 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911580 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911609 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911642 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:22 crc kubenswrapper[4801]: I1124 21:13:22.911690 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdgk\" (UniqueName: \"kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.012812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.012870 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 
21:13:23.012891 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.012938 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.012978 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.013006 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.013037 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdgk\" (UniqueName: \"kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.014106 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.014187 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.014242 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.014757 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.018772 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.019021 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.028277 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdgk\" (UniqueName: \"kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk\") pod \"console-6b8bf4964f-q79rb\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.167173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.389003 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.648489 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b8bf4964f-q79rb" event={"ID":"74e357cb-afbe-47dd-96a2-9e4e703481cd","Type":"ContainerStarted","Data":"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299"} Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.648543 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b8bf4964f-q79rb" event={"ID":"74e357cb-afbe-47dd-96a2-9e4e703481cd","Type":"ContainerStarted","Data":"47d1068609f469e4942c1fec3512846cbd5a5d65f477e1f02f80c4898f14913a"} Nov 24 21:13:23 crc kubenswrapper[4801]: I1124 21:13:23.672050 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b8bf4964f-q79rb" podStartSLOduration=1.672031697 podStartE2EDuration="1.672031697s" podCreationTimestamp="2025-11-24 21:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:13:23.668429158 +0000 UTC m=+375.751015848" watchObservedRunningTime="2025-11-24 21:13:23.672031697 +0000 UTC m=+375.754618357" Nov 24 21:13:24 crc kubenswrapper[4801]: I1124 21:13:24.320595 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:13:24 crc kubenswrapper[4801]: I1124 21:13:24.320950 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:13:33 crc kubenswrapper[4801]: I1124 21:13:33.168447 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:33 crc kubenswrapper[4801]: I1124 21:13:33.170724 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:33 crc kubenswrapper[4801]: I1124 21:13:33.178026 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:33 crc kubenswrapper[4801]: I1124 21:13:33.731480 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:13:33 crc kubenswrapper[4801]: I1124 21:13:33.798327 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.319805 4801 patch_prober.go:28] interesting 
pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.320453 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.320514 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.321233 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.321326 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf" gracePeriod=600 Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.878713 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf" exitCode=0 Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.878798 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf"} Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.879190 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353"} Nov 24 21:13:54 crc kubenswrapper[4801]: I1124 21:13:54.879226 4801 scope.go:117] "RemoveContainer" containerID="581f88f35bda3d892b999e581943494965a2fc812aaa5a3b894991da87e81065" Nov 24 21:13:58 crc kubenswrapper[4801]: I1124 21:13:58.872613 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b9c67dcbb-7ds4w" podUID="84e99486-cf05-432c-b631-d069844c34bd" containerName="console" containerID="cri-o://21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9" gracePeriod=15 Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.270411 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b9c67dcbb-7ds4w_84e99486-cf05-432c-b631-d069844c34bd/console/0.log" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.270818 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.382861 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.382917 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.382982 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.383056 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.383077 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.383104 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-vjzwn\" (UniqueName: \"kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.383148 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config\") pod \"84e99486-cf05-432c-b631-d069844c34bd\" (UID: \"84e99486-cf05-432c-b631-d069844c34bd\") " Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.384334 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.384475 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config" (OuterVolumeSpecName: "console-config") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.384498 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca" (OuterVolumeSpecName: "service-ca") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.384954 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.391085 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.397230 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.397634 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn" (OuterVolumeSpecName: "kube-api-access-vjzwn") pod "84e99486-cf05-432c-b631-d069844c34bd" (UID: "84e99486-cf05-432c-b631-d069844c34bd"). InnerVolumeSpecName "kube-api-access-vjzwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485344 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485425 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485440 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485457 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjzwn\" (UniqueName: \"kubernetes.io/projected/84e99486-cf05-432c-b631-d069844c34bd-kube-api-access-vjzwn\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485473 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e99486-cf05-432c-b631-d069844c34bd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485484 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.485496 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e99486-cf05-432c-b631-d069844c34bd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:13:59 crc 
kubenswrapper[4801]: I1124 21:13:59.921883 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b9c67dcbb-7ds4w_84e99486-cf05-432c-b631-d069844c34bd/console/0.log" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.922011 4801 generic.go:334] "Generic (PLEG): container finished" podID="84e99486-cf05-432c-b631-d069844c34bd" containerID="21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9" exitCode=2 Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.922088 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9c67dcbb-7ds4w" event={"ID":"84e99486-cf05-432c-b631-d069844c34bd","Type":"ContainerDied","Data":"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9"} Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.922174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9c67dcbb-7ds4w" event={"ID":"84e99486-cf05-432c-b631-d069844c34bd","Type":"ContainerDied","Data":"26b9459622ad3d17b2a859b13337e5433502de8b413f5b94e26a4a99bbc5870f"} Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.922222 4801 scope.go:117] "RemoveContainer" containerID="21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.922525 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b9c67dcbb-7ds4w" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.946929 4801 scope.go:117] "RemoveContainer" containerID="21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9" Nov 24 21:13:59 crc kubenswrapper[4801]: E1124 21:13:59.948477 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9\": container with ID starting with 21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9 not found: ID does not exist" containerID="21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.948556 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9"} err="failed to get container status \"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9\": rpc error: code = NotFound desc = could not find container \"21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9\": container with ID starting with 21a4b782a10c7ea149be9b6b7079b79625ad1f6618ce7ada367904a7c2f6a7a9 not found: ID does not exist" Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.974921 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:13:59 crc kubenswrapper[4801]: I1124 21:13:59.982077 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b9c67dcbb-7ds4w"] Nov 24 21:14:00 crc kubenswrapper[4801]: I1124 21:14:00.674771 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e99486-cf05-432c-b631-d069844c34bd" path="/var/lib/kubelet/pods/84e99486-cf05-432c-b631-d069844c34bd/volumes" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.155343 4801 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws"] Nov 24 21:15:00 crc kubenswrapper[4801]: E1124 21:15:00.158437 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e99486-cf05-432c-b631-d069844c34bd" containerName="console" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.158546 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e99486-cf05-432c-b631-d069844c34bd" containerName="console" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.159480 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e99486-cf05-432c-b631-d069844c34bd" containerName="console" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.160285 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.164355 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.167199 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.168830 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws"] Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.187796 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65qk\" (UniqueName: \"kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.187872 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.188068 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.289480 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65qk\" (UniqueName: \"kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.289565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.289599 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: 
\"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.291126 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.297262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.323305 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65qk\" (UniqueName: \"kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk\") pod \"collect-profiles-29400315-qvhws\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.487558 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:00 crc kubenswrapper[4801]: I1124 21:15:00.953543 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws"] Nov 24 21:15:01 crc kubenswrapper[4801]: I1124 21:15:01.456364 4801 generic.go:334] "Generic (PLEG): container finished" podID="915b6a61-708c-481b-9fd3-42124c0449bd" containerID="ad24324ea40a99984b9807a7f27c2e361279e2e669f5065e3f58b90353659e61" exitCode=0 Nov 24 21:15:01 crc kubenswrapper[4801]: I1124 21:15:01.456433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" event={"ID":"915b6a61-708c-481b-9fd3-42124c0449bd","Type":"ContainerDied","Data":"ad24324ea40a99984b9807a7f27c2e361279e2e669f5065e3f58b90353659e61"} Nov 24 21:15:01 crc kubenswrapper[4801]: I1124 21:15:01.456752 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" event={"ID":"915b6a61-708c-481b-9fd3-42124c0449bd","Type":"ContainerStarted","Data":"e1eb2a9a7d99eb18c2562e6b877f3899038b6cdcac07e4a8c69e6eed6837eac3"} Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.699712 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.724546 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume\") pod \"915b6a61-708c-481b-9fd3-42124c0449bd\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.724664 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume\") pod \"915b6a61-708c-481b-9fd3-42124c0449bd\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.724706 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65qk\" (UniqueName: \"kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk\") pod \"915b6a61-708c-481b-9fd3-42124c0449bd\" (UID: \"915b6a61-708c-481b-9fd3-42124c0449bd\") " Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.726683 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "915b6a61-708c-481b-9fd3-42124c0449bd" (UID: "915b6a61-708c-481b-9fd3-42124c0449bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.746098 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk" (OuterVolumeSpecName: "kube-api-access-x65qk") pod "915b6a61-708c-481b-9fd3-42124c0449bd" (UID: "915b6a61-708c-481b-9fd3-42124c0449bd"). 
InnerVolumeSpecName "kube-api-access-x65qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.749658 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "915b6a61-708c-481b-9fd3-42124c0449bd" (UID: "915b6a61-708c-481b-9fd3-42124c0449bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.826301 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915b6a61-708c-481b-9fd3-42124c0449bd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.826807 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x65qk\" (UniqueName: \"kubernetes.io/projected/915b6a61-708c-481b-9fd3-42124c0449bd-kube-api-access-x65qk\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:02 crc kubenswrapper[4801]: I1124 21:15:02.826819 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915b6a61-708c-481b-9fd3-42124c0449bd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:03 crc kubenswrapper[4801]: I1124 21:15:03.474204 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" Nov 24 21:15:03 crc kubenswrapper[4801]: I1124 21:15:03.473980 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws" event={"ID":"915b6a61-708c-481b-9fd3-42124c0449bd","Type":"ContainerDied","Data":"e1eb2a9a7d99eb18c2562e6b877f3899038b6cdcac07e4a8c69e6eed6837eac3"} Nov 24 21:15:03 crc kubenswrapper[4801]: I1124 21:15:03.474391 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1eb2a9a7d99eb18c2562e6b877f3899038b6cdcac07e4a8c69e6eed6837eac3" Nov 24 21:15:54 crc kubenswrapper[4801]: I1124 21:15:54.320421 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:15:54 crc kubenswrapper[4801]: I1124 21:15:54.321044 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:16:24 crc kubenswrapper[4801]: I1124 21:16:24.320842 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:16:24 crc kubenswrapper[4801]: I1124 21:16:24.321972 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.697678 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5"] Nov 24 21:16:48 crc kubenswrapper[4801]: E1124 21:16:48.699195 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915b6a61-708c-481b-9fd3-42124c0449bd" containerName="collect-profiles" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.699232 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="915b6a61-708c-481b-9fd3-42124c0449bd" containerName="collect-profiles" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.699579 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="915b6a61-708c-481b-9fd3-42124c0449bd" containerName="collect-profiles" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.702794 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.706169 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5"] Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.707278 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.851681 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.851771 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.852008 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cwm\" (UniqueName: \"kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: 
I1124 21:16:48.953896 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2cwm\" (UniqueName: \"kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.953992 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.954059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.955038 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.955112 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:48 crc kubenswrapper[4801]: I1124 21:16:48.978327 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2cwm\" (UniqueName: \"kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:49 crc kubenswrapper[4801]: I1124 21:16:49.073066 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:49 crc kubenswrapper[4801]: I1124 21:16:49.345760 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5"] Nov 24 21:16:50 crc kubenswrapper[4801]: I1124 21:16:50.267612 4801 generic.go:334] "Generic (PLEG): container finished" podID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerID="dc1bbf6056ed588795aeb69ca55dbfb3d34a6bf64992c7cf277c4128ecc75358" exitCode=0 Nov 24 21:16:50 crc kubenswrapper[4801]: I1124 21:16:50.267725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" event={"ID":"84881a52-e1ca-4125-b8ae-2aceed531e1b","Type":"ContainerDied","Data":"dc1bbf6056ed588795aeb69ca55dbfb3d34a6bf64992c7cf277c4128ecc75358"} Nov 24 21:16:50 crc kubenswrapper[4801]: I1124 21:16:50.268080 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" event={"ID":"84881a52-e1ca-4125-b8ae-2aceed531e1b","Type":"ContainerStarted","Data":"3deb979b85aeb391fcb23a65b14169293eead7db17317d78bb31f31d59326254"} Nov 24 21:16:50 crc kubenswrapper[4801]: I1124 21:16:50.270504 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:16:52 crc kubenswrapper[4801]: I1124 21:16:52.288882 4801 generic.go:334] "Generic (PLEG): container finished" podID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerID="b585beac7fa95b9c8fcc03e678d9a2651133ae4fa97d47afcc90c19d1643f7e1" exitCode=0 Nov 24 21:16:52 crc kubenswrapper[4801]: I1124 21:16:52.288996 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" event={"ID":"84881a52-e1ca-4125-b8ae-2aceed531e1b","Type":"ContainerDied","Data":"b585beac7fa95b9c8fcc03e678d9a2651133ae4fa97d47afcc90c19d1643f7e1"} Nov 24 21:16:53 crc kubenswrapper[4801]: I1124 21:16:53.297280 4801 generic.go:334] "Generic (PLEG): container finished" podID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerID="479569f7a82801b50271fc5c5fac00f2c71262bbb2beb1b95b18cb5d233de034" exitCode=0 Nov 24 21:16:53 crc kubenswrapper[4801]: I1124 21:16:53.297597 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" event={"ID":"84881a52-e1ca-4125-b8ae-2aceed531e1b","Type":"ContainerDied","Data":"479569f7a82801b50271fc5c5fac00f2c71262bbb2beb1b95b18cb5d233de034"} Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.320277 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 
21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.320336 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.320394 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.320951 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.321001 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353" gracePeriod=600 Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.603336 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.782983 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util\") pod \"84881a52-e1ca-4125-b8ae-2aceed531e1b\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.783109 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle\") pod \"84881a52-e1ca-4125-b8ae-2aceed531e1b\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.783224 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2cwm\" (UniqueName: \"kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm\") pod \"84881a52-e1ca-4125-b8ae-2aceed531e1b\" (UID: \"84881a52-e1ca-4125-b8ae-2aceed531e1b\") " Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.787074 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle" (OuterVolumeSpecName: "bundle") pod "84881a52-e1ca-4125-b8ae-2aceed531e1b" (UID: "84881a52-e1ca-4125-b8ae-2aceed531e1b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.790050 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm" (OuterVolumeSpecName: "kube-api-access-k2cwm") pod "84881a52-e1ca-4125-b8ae-2aceed531e1b" (UID: "84881a52-e1ca-4125-b8ae-2aceed531e1b"). InnerVolumeSpecName "kube-api-access-k2cwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.798486 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util" (OuterVolumeSpecName: "util") pod "84881a52-e1ca-4125-b8ae-2aceed531e1b" (UID: "84881a52-e1ca-4125-b8ae-2aceed531e1b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.884972 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2cwm\" (UniqueName: \"kubernetes.io/projected/84881a52-e1ca-4125-b8ae-2aceed531e1b-kube-api-access-k2cwm\") on node \"crc\" DevicePath \"\"" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.885339 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:16:54 crc kubenswrapper[4801]: I1124 21:16:54.885354 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84881a52-e1ca-4125-b8ae-2aceed531e1b-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.312812 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" event={"ID":"84881a52-e1ca-4125-b8ae-2aceed531e1b","Type":"ContainerDied","Data":"3deb979b85aeb391fcb23a65b14169293eead7db17317d78bb31f31d59326254"} Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.312855 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5" Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.312869 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3deb979b85aeb391fcb23a65b14169293eead7db17317d78bb31f31d59326254" Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.315687 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353" exitCode=0 Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.315734 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353"} Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.315811 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218"} Nov 24 21:16:55 crc kubenswrapper[4801]: I1124 21:16:55.315851 4801 scope.go:117] "RemoveContainer" containerID="af707f1810af6f57000f5ab08c007f351bf778badb03b5db24d60d19835c23bf" Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.877174 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrqff"] Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880123 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-controller" containerID="cri-o://b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d" gracePeriod=30 Nov 24 21:16:59 crc 
kubenswrapper[4801]: I1124 21:16:59.880295 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-acl-logging" containerID="cri-o://009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880209 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880219 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="nbdb" containerID="cri-o://12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880284 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-node" containerID="cri-o://10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880291 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="sbdb" containerID="cri-o://07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.880627 4801 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="northd" containerID="cri-o://98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5" gracePeriod=30 Nov 24 21:16:59 crc kubenswrapper[4801]: I1124 21:16:59.959834 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" containerID="cri-o://624b913e9b3d3311007ee1800920070fc6168d102b51d6a2cf31ad81fad53194" gracePeriod=30 Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.081482 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.081818 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.083122 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.085009 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.085061 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="nbdb" Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.090760 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.092709 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.092788 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="sbdb" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.362153 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/2.log" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.362692 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/1.log" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.362808 4801 generic.go:334] "Generic (PLEG): container finished" podID="5f348c59-5453-436a-bcce-548bdef22a27" containerID="bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a" exitCode=2 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.362930 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerDied","Data":"bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.363016 4801 scope.go:117] "RemoveContainer" containerID="cfe48befe75a5f165ab4ae136f4da6013d3603008917bbeec6d7bc848c33416e" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.363628 4801 scope.go:117] "RemoveContainer" containerID="bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a" Nov 24 21:17:00 crc kubenswrapper[4801]: E1124 21:17:00.364146 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 
20s restarting failed container=kube-multus pod=multus-gdjvp_openshift-multus(5f348c59-5453-436a-bcce-548bdef22a27)\"" pod="openshift-multus/multus-gdjvp" podUID="5f348c59-5453-436a-bcce-548bdef22a27" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.366097 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovnkube-controller/3.log" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.368840 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-acl-logging/0.log" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.369394 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-controller/0.log" Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370117 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="624b913e9b3d3311007ee1800920070fc6168d102b51d6a2cf31ad81fad53194" exitCode=0 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370178 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" exitCode=0 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370187 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" exitCode=0 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370193 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5" exitCode=0 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370199 4801 
generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa" exitCode=143 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370206 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d" exitCode=143 Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"624b913e9b3d3311007ee1800920070fc6168d102b51d6a2cf31ad81fad53194"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370334 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370353 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" 
event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.370423 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d"} Nov 24 21:17:00 crc kubenswrapper[4801]: I1124 21:17:00.406490 4801 scope.go:117] "RemoveContainer" containerID="4cb9a49a516a3478527986e8157714a4ac4b60cbb83d8f47c1604b6620cbd713" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.377477 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/2.log" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.381887 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-acl-logging/0.log" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.382431 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-controller/0.log" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.382797 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d" exitCode=0 Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.382828 4801 generic.go:334] "Generic (PLEG): container finished" podID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerID="10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285" exitCode=0 Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.382860 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" 
event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d"} Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.382919 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285"} Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.575381 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-acl-logging/0.log" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.575835 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-controller/0.log" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.576466 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696550 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696631 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696664 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696707 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxzz\" (UniqueName: \"kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696753 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696775 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696817 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696844 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696919 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696958 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: 
\"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.696983 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697016 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697048 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697068 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697118 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697150 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697174 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697165 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697202 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697234 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn\") pod \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\" (UID: \"6757adc4-e0f2-49a6-8320-29cb96e4a10f\") " Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697244 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash" (OuterVolumeSpecName: "host-slash") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: 
"6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697276 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697569 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697646 4801 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697668 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697681 4801 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697761 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697789 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.697807 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698449 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698443 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698472 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698515 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log" (OuterVolumeSpecName: "node-log") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698542 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698571 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698618 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket" (OuterVolumeSpecName: "log-socket") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698639 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.698757 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.725386 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz" (OuterVolumeSpecName: "kube-api-access-mnxzz") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "kube-api-access-mnxzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.733894 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nh2x"] Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734175 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="northd" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734194 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="northd" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734207 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-acl-logging" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734216 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-acl-logging" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734227 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="pull" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734236 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="pull" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734246 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kubecfg-setup" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734253 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kubecfg-setup" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734264 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="util" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734272 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="util" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734285 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734292 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734299 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734304 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734317 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734325 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734337 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="sbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734344 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="sbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734352 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="nbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734377 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="nbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734393 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734399 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734407 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-node" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734412 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-node" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734422 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734428 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734439 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="extract" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734445 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="extract" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734455 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734461 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734557 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="84881a52-e1ca-4125-b8ae-2aceed531e1b" containerName="extract" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734567 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734574 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734585 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-node" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734592 4801 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734601 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734610 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="sbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734618 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="northd" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734626 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="nbdb" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734635 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovn-acl-logging" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734641 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: E1124 21:17:01.734740 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734748 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734839 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.734849 4801 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" containerName="ovnkube-controller" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.736852 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.739166 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.750221 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6757adc4-e0f2-49a6-8320-29cb96e4a10f" (UID: "6757adc4-e0f2-49a6-8320-29cb96e4a10f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800711 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800758 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxzz\" (UniqueName: \"kubernetes.io/projected/6757adc4-e0f2-49a6-8320-29cb96e4a10f-kube-api-access-mnxzz\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800776 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800792 4801 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800802 4801 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800813 4801 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800823 4801 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800833 4801 
reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800843 4801 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800853 4801 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800865 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6757adc4-e0f2-49a6-8320-29cb96e4a10f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800875 4801 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800884 4801 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800893 4801 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800902 4801 reconciler_common.go:293] "Volume detached for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800911 4801 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.800921 4801 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757adc4-e0f2-49a6-8320-29cb96e4a10f-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-ovn\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902500 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-var-lib-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902520 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902550 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-etc-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902577 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-log-socket\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902598 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-script-lib\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902614 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6zm\" (UniqueName: \"kubernetes.io/projected/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-kube-api-access-7q6zm\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902642 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-bin\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902669 
4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-node-log\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902699 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902724 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-env-overrides\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902740 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-systemd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902758 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-netns\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902776 
4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-netd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902794 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-config\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovn-node-metrics-cert\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-kubelet\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902851 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902868 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-systemd-units\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:01 crc kubenswrapper[4801]: I1124 21:17:01.902885 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-slash\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004487 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-ovn\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004536 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-var-lib-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004584 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-etc-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004608 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-log-socket\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-script-lib\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004648 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6zm\" (UniqueName: \"kubernetes.io/projected/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-kube-api-access-7q6zm\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-bin\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 
crc kubenswrapper[4801]: I1124 21:17:02.004689 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-node-log\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004718 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004771 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-env-overrides\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004838 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-netns\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-systemd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004900 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-netd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004930 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-config\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovn-node-metrics-cert\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004980 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-kubelet\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005003 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-systemd-units\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005044 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-slash\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005080 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-bin\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005104 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-node-log\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005383 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-etc-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005423 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-log-socket\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005418 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-kubelet\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.004697 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-var-lib-openvswitch\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005489 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-systemd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005515 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-cni-netd\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005527 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-slash\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005568 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-host-run-netns\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005529 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-systemd-units\") pod \"ovnkube-node-9nh2x\" (UID: 
\"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005805 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-script-lib\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005915 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-run-ovn\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.005934 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovnkube-config\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.006555 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-env-overrides\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.012219 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-ovn-node-metrics-cert\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc 
kubenswrapper[4801]: I1124 21:17:02.024461 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6zm\" (UniqueName: \"kubernetes.io/projected/5616e2ad-5308-47d2-b2db-d8f3cffd4a04-kube-api-access-7q6zm\") pod \"ovnkube-node-9nh2x\" (UID: \"5616e2ad-5308-47d2-b2db-d8f3cffd4a04\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.078693 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.390403 4801 generic.go:334] "Generic (PLEG): container finished" podID="5616e2ad-5308-47d2-b2db-d8f3cffd4a04" containerID="7fb841d2ed2b9981c5177fdcd93cc8cc26bd6eb99bb712c93b5548d7c29e2c66" exitCode=0 Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.390577 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerDied","Data":"7fb841d2ed2b9981c5177fdcd93cc8cc26bd6eb99bb712c93b5548d7c29e2c66"} Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.391042 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"057062900ffb5af02dd294d7831d38b4e6c1093d41d1e688fbd0112587e165dc"} Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.395710 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-acl-logging/0.log" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.412813 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrqff_6757adc4-e0f2-49a6-8320-29cb96e4a10f/ovn-controller/0.log" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.421239 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" event={"ID":"6757adc4-e0f2-49a6-8320-29cb96e4a10f","Type":"ContainerDied","Data":"0ebcecc12d6be61054df541cbff7e74a2293b0beb8007195dd7ffab9eb10d4c5"} Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.421308 4801 scope.go:117] "RemoveContainer" containerID="624b913e9b3d3311007ee1800920070fc6168d102b51d6a2cf31ad81fad53194" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.421558 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrqff" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.492966 4801 scope.go:117] "RemoveContainer" containerID="07ff44a82cce51efde4b94ade3dd288d3852887c709d56a524b26e2b77d69894" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.532813 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrqff"] Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.540543 4801 scope.go:117] "RemoveContainer" containerID="12f87e6bf62d9dafbc14052722294999bca229ae6b3d642f784c9e1005e86785" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.544731 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrqff"] Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.592224 4801 scope.go:117] "RemoveContainer" containerID="98ea41de1eceb192c8ae27f5e40100535978c55a9ef5c1f1e0a45cfd08f28fe5" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.609351 4801 scope.go:117] "RemoveContainer" containerID="7981ac5dd8eabbee6b36988d98b5c2645cd414f06b5b670726f95c4abfc5b50d" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.634548 4801 scope.go:117] "RemoveContainer" containerID="10c24f019c954b3224fcf870d0d4cde9b7f840fcb5429a25c6e0195735058285" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.673189 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6757adc4-e0f2-49a6-8320-29cb96e4a10f" path="/var/lib/kubelet/pods/6757adc4-e0f2-49a6-8320-29cb96e4a10f/volumes" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.695405 4801 scope.go:117] "RemoveContainer" containerID="009ce3902ecba2cdded36228ba87a644a103a31cf01d1e0c80e7b2b5bb9ea7aa" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.730506 4801 scope.go:117] "RemoveContainer" containerID="b981bb563c34e6b5f294d5843d1ac5b9b341385c8b94b6535930c91631d0347d" Nov 24 21:17:02 crc kubenswrapper[4801]: I1124 21:17:02.765153 4801 scope.go:117] "RemoveContainer" containerID="224de381d36ce7e7d2c6c3126387a195b872501a24b6c9f65ff6ab36ebf02027" Nov 24 21:17:03 crc kubenswrapper[4801]: I1124 21:17:03.491401 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"623fbfddf9d36ce0b4f0324541386a5fd793193c5d12fa7c455e471b8a77c66c"} Nov 24 21:17:03 crc kubenswrapper[4801]: I1124 21:17:03.491876 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"2bedf6e9675639212875e3edc3bc6eeb89a78c2371596f9e485bb65e40ba1241"} Nov 24 21:17:03 crc kubenswrapper[4801]: I1124 21:17:03.491887 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"b6da3c3b7a284754a654cc27117de8e48b6c7f3bbe30da9a1389b48eea76e943"} Nov 24 21:17:03 crc kubenswrapper[4801]: I1124 21:17:03.491895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"24d85d81f90b1ca89e8df1539e6b55bf840a4cb5e76958886bb9061b072e8b1a"} Nov 24 21:17:04 crc kubenswrapper[4801]: I1124 21:17:04.500095 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"13910c1fca7d538543667f0a5113cc23317e6c6b430cbf1c2c66883a1bad2b8f"} Nov 24 21:17:04 crc kubenswrapper[4801]: I1124 21:17:04.500560 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"041d4b57c5f808d494c3edfeafbdff002f1cf398ca0f353ae41892cafb587f70"} Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.517509 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"5280b7852db292317dba51b565dc6f35849f981672964865a4e084d6c7a3d28c"} Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.662914 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"] Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.664165 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.668022 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9sg98" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.671877 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.675537 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.752671 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"] Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.755563 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.758024 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.758452 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-p9wmt" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.774260 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"] Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.777775 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.799993 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8tl\" (UniqueName: \"kubernetes.io/projected/375369a9-813f-42c3-8834-351eb5a1e296-kube-api-access-kv8tl\") pod \"obo-prometheus-operator-668cf9dfbb-45vj8\" (UID: \"375369a9-813f-42c3-8834-351eb5a1e296\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.901408 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.901462 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.901492 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8tl\" (UniqueName: \"kubernetes.io/projected/375369a9-813f-42c3-8834-351eb5a1e296-kube-api-access-kv8tl\") pod \"obo-prometheus-operator-668cf9dfbb-45vj8\" (UID: \"375369a9-813f-42c3-8834-351eb5a1e296\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:06 crc kubenswrapper[4801]: 
I1124 21:17:06.901521 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.901778 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.927400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8tl\" (UniqueName: \"kubernetes.io/projected/375369a9-813f-42c3-8834-351eb5a1e296-kube-api-access-kv8tl\") pod \"obo-prometheus-operator-668cf9dfbb-45vj8\" (UID: \"375369a9-813f-42c3-8834-351eb5a1e296\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.950313 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-p49bx"] Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.951061 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.954458 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.954468 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-w2l5t" Nov 24 21:17:06 crc kubenswrapper[4801]: I1124 21:17:06.979904 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.003593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.003643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.003672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.003755 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.007591 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.009222 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7917bbe-8241-422f-b736-49a933738504-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4\" (UID: \"a7917bbe-8241-422f-b736-49a933738504\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.009452 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.014435 4801 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(0abfd90a6f986b4835ebc6cc232b80d6876654a48c6428cfe14c72ae4c6aeb8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.014507 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(0abfd90a6f986b4835ebc6cc232b80d6876654a48c6428cfe14c72ae4c6aeb8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.014529 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(0abfd90a6f986b4835ebc6cc232b80d6876654a48c6428cfe14c72ae4c6aeb8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.014576 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(0abfd90a6f986b4835ebc6cc232b80d6876654a48c6428cfe14c72ae4c6aeb8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" podUID="375369a9-813f-42c3-8834-351eb5a1e296"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.019048 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22cb726c-4ab9-4abd-8833-064874737125-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f\" (UID: \"22cb726c-4ab9-4abd-8833-064874737125\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.070573 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.083942 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-lwcrt"]
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.084763 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.088276 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-95k8p"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.092437 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(127c979a6ae7d9739937feb0ad1278bd2c3f77a8b1187f338369960a9666c7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.092495 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(127c979a6ae7d9739937feb0ad1278bd2c3f77a8b1187f338369960a9666c7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.092521 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(127c979a6ae7d9739937feb0ad1278bd2c3f77a8b1187f338369960a9666c7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.092572 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(127c979a6ae7d9739937feb0ad1278bd2c3f77a8b1187f338369960a9666c7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" podUID="a7917bbe-8241-422f-b736-49a933738504"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.100809 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.105426 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/303202ae-884f-4b3e-a58a-77c294c81e7b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.105522 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqszb\" (UniqueName: \"kubernetes.io/projected/303202ae-884f-4b3e-a58a-77c294c81e7b-kube-api-access-kqszb\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.128749 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(177baa59d5db5663a6aa88b517524de205b58f6b07aaec13edc2404a40656d6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.128820 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(177baa59d5db5663a6aa88b517524de205b58f6b07aaec13edc2404a40656d6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.128845 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(177baa59d5db5663a6aa88b517524de205b58f6b07aaec13edc2404a40656d6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.128897 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(177baa59d5db5663a6aa88b517524de205b58f6b07aaec13edc2404a40656d6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" podUID="22cb726c-4ab9-4abd-8833-064874737125"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.206664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqszb\" (UniqueName: \"kubernetes.io/projected/303202ae-884f-4b3e-a58a-77c294c81e7b-kube-api-access-kqszb\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.206778 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/303202ae-884f-4b3e-a58a-77c294c81e7b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.206832 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mxx\" (UniqueName: \"kubernetes.io/projected/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-kube-api-access-q7mxx\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.206854 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-openshift-service-ca\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.211756 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/303202ae-884f-4b3e-a58a-77c294c81e7b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.225037 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqszb\" (UniqueName: \"kubernetes.io/projected/303202ae-884f-4b3e-a58a-77c294c81e7b-kube-api-access-kqszb\") pod \"observability-operator-d8bb48f5d-p49bx\" (UID: \"303202ae-884f-4b3e-a58a-77c294c81e7b\") " pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.268574 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.296482 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(f894bd501341847f2cdeb48f355d3368db0d6ef76643c1fa6742a68f5801ae56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.296624 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(f894bd501341847f2cdeb48f355d3368db0d6ef76643c1fa6742a68f5801ae56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.296699 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(f894bd501341847f2cdeb48f355d3368db0d6ef76643c1fa6742a68f5801ae56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.296797 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(f894bd501341847f2cdeb48f355d3368db0d6ef76643c1fa6742a68f5801ae56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" podUID="303202ae-884f-4b3e-a58a-77c294c81e7b"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.308603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-openshift-service-ca\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.308663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mxx\" (UniqueName: \"kubernetes.io/projected/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-kube-api-access-q7mxx\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.309639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-openshift-service-ca\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.326585 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mxx\" (UniqueName: \"kubernetes.io/projected/7cc4d145-d0b4-45a9-b424-e2f09c04c88e-kube-api-access-q7mxx\") pod \"perses-operator-5446b9c989-lwcrt\" (UID: \"7cc4d145-d0b4-45a9-b424-e2f09c04c88e\") " pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: I1124 21:17:07.442454 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.466190 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(d934914d0204a3c789a6f733524ad9b4417e44e7d381559753793d9e67ee835f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.466256 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(d934914d0204a3c789a6f733524ad9b4417e44e7d381559753793d9e67ee835f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.466279 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(d934914d0204a3c789a6f733524ad9b4417e44e7d381559753793d9e67ee835f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:07 crc kubenswrapper[4801]: E1124 21:17:07.466330 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(d934914d0204a3c789a6f733524ad9b4417e44e7d381559753793d9e67ee835f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" podUID="7cc4d145-d0b4-45a9-b424-e2f09c04c88e"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.579902 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" event={"ID":"5616e2ad-5308-47d2-b2db-d8f3cffd4a04","Type":"ContainerStarted","Data":"4cd6d440583beb5a3e1ae34c4a8c56bebd375a9795fcdf1bc0f12acb0c4dc2af"}
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.584550 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.584641 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.584760 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.619379 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" podStartSLOduration=7.619344496 podStartE2EDuration="7.619344496s" podCreationTimestamp="2025-11-24 21:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:17:08.617593719 +0000 UTC m=+600.700180379" watchObservedRunningTime="2025-11-24 21:17:08.619344496 +0000 UTC m=+600.701931166"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.629882 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x"
Nov 24 21:17:08 crc kubenswrapper[4801]: I1124 21:17:08.632636 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.553847 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"]
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.553984 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.554723 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.567823 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-p49bx"]
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.568006 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.568732 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.579219 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"]
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.579621 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.580534 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.623639 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(f586b7a1800c5bbb47cf34c278e9e1fd555865f84a1c2b03ce87dd47f5ce2d44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.625100 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(f586b7a1800c5bbb47cf34c278e9e1fd555865f84a1c2b03ce87dd47f5ce2d44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.625353 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(f586b7a1800c5bbb47cf34c278e9e1fd555865f84a1c2b03ce87dd47f5ce2d44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.625597 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(f586b7a1800c5bbb47cf34c278e9e1fd555865f84a1c2b03ce87dd47f5ce2d44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" podUID="22cb726c-4ab9-4abd-8833-064874737125"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.645315 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37a7254a9e3949c15ae199bf57051f929908c8bdf7c1684bf482cf7127c33a20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.645398 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37a7254a9e3949c15ae199bf57051f929908c8bdf7c1684bf482cf7127c33a20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.645428 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37a7254a9e3949c15ae199bf57051f929908c8bdf7c1684bf482cf7127c33a20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.645470 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37a7254a9e3949c15ae199bf57051f929908c8bdf7c1684bf482cf7127c33a20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" podUID="303202ae-884f-4b3e-a58a-77c294c81e7b"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.662782 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(95418ba360cff7da09fc1025ab10a295d9810788ba138e43aa82469ce27fc485): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.662859 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(95418ba360cff7da09fc1025ab10a295d9810788ba138e43aa82469ce27fc485): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.662888 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(95418ba360cff7da09fc1025ab10a295d9810788ba138e43aa82469ce27fc485): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.662938 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(95418ba360cff7da09fc1025ab10a295d9810788ba138e43aa82469ce27fc485): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" podUID="a7917bbe-8241-422f-b736-49a933738504"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.666642 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-lwcrt"]
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.666756 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.667219 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.674416 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"]
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.674544 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:09 crc kubenswrapper[4801]: I1124 21:17:09.675095 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.728938 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(e949e6205d9e650abaae2b0b5bc2cae15e113e25418c607e0d86d761bfe9443d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.729003 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(e949e6205d9e650abaae2b0b5bc2cae15e113e25418c607e0d86d761bfe9443d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.729029 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(e949e6205d9e650abaae2b0b5bc2cae15e113e25418c607e0d86d761bfe9443d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-lwcrt"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.729075 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(e949e6205d9e650abaae2b0b5bc2cae15e113e25418c607e0d86d761bfe9443d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" podUID="7cc4d145-d0b4-45a9-b424-e2f09c04c88e"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.736008 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(7e2ac6a592799a72ee95a147d5ec7bdbca610cca8dfbecd0cd3f2c55860c6577): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.736057 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(7e2ac6a592799a72ee95a147d5ec7bdbca610cca8dfbecd0cd3f2c55860c6577): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.736079 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(7e2ac6a592799a72ee95a147d5ec7bdbca610cca8dfbecd0cd3f2c55860c6577): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:09 crc kubenswrapper[4801]: E1124 21:17:09.736127 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(7e2ac6a592799a72ee95a147d5ec7bdbca610cca8dfbecd0cd3f2c55860c6577): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" podUID="375369a9-813f-42c3-8834-351eb5a1e296"
Nov 24 21:17:11 crc kubenswrapper[4801]: I1124 21:17:11.663939 4801 scope.go:117] "RemoveContainer" containerID="bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a"
Nov 24 21:17:11 crc kubenswrapper[4801]: E1124 21:17:11.665184 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gdjvp_openshift-multus(5f348c59-5453-436a-bcce-548bdef22a27)\"" pod="openshift-multus/multus-gdjvp" podUID="5f348c59-5453-436a-bcce-548bdef22a27"
Nov 24 21:17:20 crc kubenswrapper[4801]: I1124 21:17:20.663284 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:20 crc kubenswrapper[4801]: I1124 21:17:20.664920 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:20 crc kubenswrapper[4801]: E1124 21:17:20.727147 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(9adc33577305878a55532a22dd6c3bbf0fedd3ec530bb82442551033726052d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:20 crc kubenswrapper[4801]: E1124 21:17:20.727243 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(9adc33577305878a55532a22dd6c3bbf0fedd3ec530bb82442551033726052d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:20 crc kubenswrapper[4801]: E1124 21:17:20.727285 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(9adc33577305878a55532a22dd6c3bbf0fedd3ec530bb82442551033726052d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"
Nov 24 21:17:20 crc kubenswrapper[4801]: E1124 21:17:20.727351 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators(375369a9-813f-42c3-8834-351eb5a1e296)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-45vj8_openshift-operators_375369a9-813f-42c3-8834-351eb5a1e296_0(9adc33577305878a55532a22dd6c3bbf0fedd3ec530bb82442551033726052d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" podUID="375369a9-813f-42c3-8834-351eb5a1e296"
Nov 24 21:17:21 crc kubenswrapper[4801]: I1124 21:17:21.668341 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:21 crc kubenswrapper[4801]: I1124 21:17:21.669412 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:21 crc kubenswrapper[4801]: E1124 21:17:21.698354 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37812aed5f2b3c88907a860f82f931e3475cd4e2a362dac35ccfd450bc8fcf72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 21:17:21 crc kubenswrapper[4801]: E1124 21:17:21.698509 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37812aed5f2b3c88907a860f82f931e3475cd4e2a362dac35ccfd450bc8fcf72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx"
Nov 24 21:17:21 crc kubenswrapper[4801]: E1124 21:17:21.698554 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37812aed5f2b3c88907a860f82f931e3475cd4e2a362dac35ccfd450bc8fcf72): no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:21 crc kubenswrapper[4801]: E1124 21:17:21.698654 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-p49bx_openshift-operators(303202ae-884f-4b3e-a58a-77c294c81e7b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-p49bx_openshift-operators_303202ae-884f-4b3e-a58a-77c294c81e7b_0(37812aed5f2b3c88907a860f82f931e3475cd4e2a362dac35ccfd450bc8fcf72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" podUID="303202ae-884f-4b3e-a58a-77c294c81e7b" Nov 24 21:17:22 crc kubenswrapper[4801]: I1124 21:17:22.663116 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:22 crc kubenswrapper[4801]: I1124 21:17:22.663262 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:22 crc kubenswrapper[4801]: I1124 21:17:22.664392 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:22 crc kubenswrapper[4801]: I1124 21:17:22.664575 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.696179 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(1033e5094e75bb202abaa76acda34addfb51a6c7f35181376283e75657d7cdd6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.696279 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(1033e5094e75bb202abaa76acda34addfb51a6c7f35181376283e75657d7cdd6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.696314 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(1033e5094e75bb202abaa76acda34addfb51a6c7f35181376283e75657d7cdd6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.696430 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-lwcrt_openshift-operators(7cc4d145-d0b4-45a9-b424-e2f09c04c88e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-lwcrt_openshift-operators_7cc4d145-d0b4-45a9-b424-e2f09c04c88e_0(1033e5094e75bb202abaa76acda34addfb51a6c7f35181376283e75657d7cdd6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" podUID="7cc4d145-d0b4-45a9-b424-e2f09c04c88e" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.708088 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(4c4f8cd0a22516436bc253e907a9cd5a5c07b998d9b1bfb9d06638412e737ce3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.708209 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(4c4f8cd0a22516436bc253e907a9cd5a5c07b998d9b1bfb9d06638412e737ce3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.708239 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(4c4f8cd0a22516436bc253e907a9cd5a5c07b998d9b1bfb9d06638412e737ce3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:22 crc kubenswrapper[4801]: E1124 21:17:22.708319 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators(a7917bbe-8241-422f-b736-49a933738504)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_openshift-operators_a7917bbe-8241-422f-b736-49a933738504_0(4c4f8cd0a22516436bc253e907a9cd5a5c07b998d9b1bfb9d06638412e737ce3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" podUID="a7917bbe-8241-422f-b736-49a933738504" Nov 24 21:17:23 crc kubenswrapper[4801]: I1124 21:17:23.663923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:23 crc kubenswrapper[4801]: I1124 21:17:23.664725 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:23 crc kubenswrapper[4801]: E1124 21:17:23.699103 4801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(d581fe080dca3435d1ed90dbb995aa48a7439bdd75c3118058090761ce80930d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:17:23 crc kubenswrapper[4801]: E1124 21:17:23.699218 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(d581fe080dca3435d1ed90dbb995aa48a7439bdd75c3118058090761ce80930d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:23 crc kubenswrapper[4801]: E1124 21:17:23.699250 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(d581fe080dca3435d1ed90dbb995aa48a7439bdd75c3118058090761ce80930d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:23 crc kubenswrapper[4801]: E1124 21:17:23.699337 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators(22cb726c-4ab9-4abd-8833-064874737125)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_openshift-operators_22cb726c-4ab9-4abd-8833-064874737125_0(d581fe080dca3435d1ed90dbb995aa48a7439bdd75c3118058090761ce80930d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" podUID="22cb726c-4ab9-4abd-8833-064874737125" Nov 24 21:17:25 crc kubenswrapper[4801]: I1124 21:17:25.664407 4801 scope.go:117] "RemoveContainer" containerID="bded6813a42903d93faa0cd462730b1d6b0fb08b0c64c2aa6280298df277b53a" Nov 24 21:17:26 crc kubenswrapper[4801]: I1124 21:17:26.767520 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gdjvp_5f348c59-5453-436a-bcce-548bdef22a27/kube-multus/2.log" Nov 24 21:17:26 crc kubenswrapper[4801]: I1124 21:17:26.767943 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gdjvp" event={"ID":"5f348c59-5453-436a-bcce-548bdef22a27","Type":"ContainerStarted","Data":"953f1fc8a50d59b27cf06ba5d05607d30aa80b93ba06944537ffae2d1417e830"} Nov 24 21:17:32 crc kubenswrapper[4801]: I1124 21:17:32.112266 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nh2x" Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.663884 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.663897 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.664946 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.665381 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.932398 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8"] Nov 24 21:17:34 crc kubenswrapper[4801]: I1124 21:17:34.996484 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-p49bx"] Nov 24 21:17:35 crc kubenswrapper[4801]: W1124 21:17:35.006914 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303202ae_884f_4b3e_a58a_77c294c81e7b.slice/crio-1929dce11ca8f3a1a6acbbd3f811a3c740aa96def9ce3dd3ea61dc56059e03ac WatchSource:0}: Error finding container 1929dce11ca8f3a1a6acbbd3f811a3c740aa96def9ce3dd3ea61dc56059e03ac: Status 404 returned error can't find the container with id 1929dce11ca8f3a1a6acbbd3f811a3c740aa96def9ce3dd3ea61dc56059e03ac Nov 24 21:17:35 crc kubenswrapper[4801]: I1124 21:17:35.663641 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:35 crc kubenswrapper[4801]: I1124 21:17:35.665062 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:35 crc kubenswrapper[4801]: I1124 21:17:35.853855 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" event={"ID":"375369a9-813f-42c3-8834-351eb5a1e296","Type":"ContainerStarted","Data":"1b443f0d26d34b97fc28b95f643d143f9899b4ba191a4e886b814193125863d9"} Nov 24 21:17:35 crc kubenswrapper[4801]: I1124 21:17:35.855328 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" event={"ID":"303202ae-884f-4b3e-a58a-77c294c81e7b","Type":"ContainerStarted","Data":"1929dce11ca8f3a1a6acbbd3f811a3c740aa96def9ce3dd3ea61dc56059e03ac"} Nov 24 21:17:35 crc kubenswrapper[4801]: I1124 21:17:35.928005 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-lwcrt"] Nov 24 21:17:35 crc kubenswrapper[4801]: W1124 21:17:35.933625 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc4d145_d0b4_45a9_b424_e2f09c04c88e.slice/crio-fb610b1361609bf5ad18b827062f2bdf1c82b042413148e63b352bc3c8337dd9 WatchSource:0}: Error finding container fb610b1361609bf5ad18b827062f2bdf1c82b042413148e63b352bc3c8337dd9: Status 404 returned error can't find the container with id fb610b1361609bf5ad18b827062f2bdf1c82b042413148e63b352bc3c8337dd9 Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.663257 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.664117 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.664584 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.664838 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.875252 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" event={"ID":"7cc4d145-d0b4-45a9-b424-e2f09c04c88e","Type":"ContainerStarted","Data":"fb610b1361609bf5ad18b827062f2bdf1c82b042413148e63b352bc3c8337dd9"} Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.935278 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f"] Nov 24 21:17:36 crc kubenswrapper[4801]: W1124 21:17:36.956916 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22cb726c_4ab9_4abd_8833_064874737125.slice/crio-f4df945171ff6fa7801e5b6b0817245098ac15340f61d97194b07b129f4ec720 WatchSource:0}: Error finding container f4df945171ff6fa7801e5b6b0817245098ac15340f61d97194b07b129f4ec720: Status 404 returned error can't find the container with id f4df945171ff6fa7801e5b6b0817245098ac15340f61d97194b07b129f4ec720 Nov 24 21:17:36 crc kubenswrapper[4801]: I1124 21:17:36.975149 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4"] Nov 24 21:17:36 crc kubenswrapper[4801]: W1124 21:17:36.995098 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7917bbe_8241_422f_b736_49a933738504.slice/crio-d98a429e195f263d7199643a9ffa4bdcfa353d50a2f71520d6ad41175cf4312e WatchSource:0}: Error finding container d98a429e195f263d7199643a9ffa4bdcfa353d50a2f71520d6ad41175cf4312e: Status 404 returned error can't find the container with id d98a429e195f263d7199643a9ffa4bdcfa353d50a2f71520d6ad41175cf4312e Nov 24 21:17:37 crc kubenswrapper[4801]: I1124 21:17:37.897020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" event={"ID":"22cb726c-4ab9-4abd-8833-064874737125","Type":"ContainerStarted","Data":"f4df945171ff6fa7801e5b6b0817245098ac15340f61d97194b07b129f4ec720"} Nov 24 21:17:37 crc kubenswrapper[4801]: I1124 21:17:37.899567 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" event={"ID":"a7917bbe-8241-422f-b736-49a933738504","Type":"ContainerStarted","Data":"d98a429e195f263d7199643a9ffa4bdcfa353d50a2f71520d6ad41175cf4312e"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.961699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" event={"ID":"a7917bbe-8241-422f-b736-49a933738504","Type":"ContainerStarted","Data":"f1c3137adb10a92a903ffa64643ad72e8934be4169ef98db19a165c64003cf2e"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.963070 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" event={"ID":"375369a9-813f-42c3-8834-351eb5a1e296","Type":"ContainerStarted","Data":"0c2979e011bbd0a2778100354ab43622c014a6cf549e3a975bb2b13386a9c606"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.964479 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" 
event={"ID":"7cc4d145-d0b4-45a9-b424-e2f09c04c88e","Type":"ContainerStarted","Data":"6cbf900158d4315ecd216fa22dadeb300c5336f59804c92286ff3bb116491e5d"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.964579 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.966114 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" event={"ID":"303202ae-884f-4b3e-a58a-77c294c81e7b","Type":"ContainerStarted","Data":"f6742a009c53f06e8bcf3da76c2859aec32594a23420902ebd03f50b4ca4678d"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.966331 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.967744 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" event={"ID":"22cb726c-4ab9-4abd-8833-064874737125","Type":"ContainerStarted","Data":"f733143635f366e0b87535abdaa6d043a613175ca4b88a2eef30c8faf040ed17"} Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.968993 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" Nov 24 21:17:45 crc kubenswrapper[4801]: I1124 21:17:45.980846 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4" podStartSLOduration=31.996101353 podStartE2EDuration="39.980829373s" podCreationTimestamp="2025-11-24 21:17:06 +0000 UTC" firstStartedPulling="2025-11-24 21:17:36.999070599 +0000 UTC m=+629.081657269" lastFinishedPulling="2025-11-24 21:17:44.983798619 +0000 UTC m=+637.066385289" observedRunningTime="2025-11-24 21:17:45.977840126 +0000 UTC 
m=+638.060426806" watchObservedRunningTime="2025-11-24 21:17:45.980829373 +0000 UTC m=+638.063416043" Nov 24 21:17:46 crc kubenswrapper[4801]: I1124 21:17:46.023897 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f" podStartSLOduration=31.984791256 podStartE2EDuration="40.023882876s" podCreationTimestamp="2025-11-24 21:17:06 +0000 UTC" firstStartedPulling="2025-11-24 21:17:36.965859224 +0000 UTC m=+629.048445894" lastFinishedPulling="2025-11-24 21:17:45.004950844 +0000 UTC m=+637.087537514" observedRunningTime="2025-11-24 21:17:46.02277343 +0000 UTC m=+638.105360100" watchObservedRunningTime="2025-11-24 21:17:46.023882876 +0000 UTC m=+638.106469546" Nov 24 21:17:46 crc kubenswrapper[4801]: I1124 21:17:46.025695 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" podStartSLOduration=29.953904917 podStartE2EDuration="39.025689165s" podCreationTimestamp="2025-11-24 21:17:07 +0000 UTC" firstStartedPulling="2025-11-24 21:17:35.936817484 +0000 UTC m=+628.019404154" lastFinishedPulling="2025-11-24 21:17:45.008601732 +0000 UTC m=+637.091188402" observedRunningTime="2025-11-24 21:17:46.001275795 +0000 UTC m=+638.083862465" watchObservedRunningTime="2025-11-24 21:17:46.025689165 +0000 UTC m=+638.108275835" Nov 24 21:17:46 crc kubenswrapper[4801]: I1124 21:17:46.052650 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-45vj8" podStartSLOduration=29.984775315 podStartE2EDuration="40.052634397s" podCreationTimestamp="2025-11-24 21:17:06 +0000 UTC" firstStartedPulling="2025-11-24 21:17:34.95866887 +0000 UTC m=+627.041255540" lastFinishedPulling="2025-11-24 21:17:45.026527922 +0000 UTC m=+637.109114622" observedRunningTime="2025-11-24 21:17:46.047106018 +0000 UTC m=+638.129692688" 
watchObservedRunningTime="2025-11-24 21:17:46.052634397 +0000 UTC m=+638.135221067" Nov 24 21:17:46 crc kubenswrapper[4801]: I1124 21:17:46.077354 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-p49bx" podStartSLOduration=30.10724216 podStartE2EDuration="40.077337286s" podCreationTimestamp="2025-11-24 21:17:06 +0000 UTC" firstStartedPulling="2025-11-24 21:17:35.011074257 +0000 UTC m=+627.093660937" lastFinishedPulling="2025-11-24 21:17:44.981169383 +0000 UTC m=+637.063756063" observedRunningTime="2025-11-24 21:17:46.077034856 +0000 UTC m=+638.159621526" watchObservedRunningTime="2025-11-24 21:17:46.077337286 +0000 UTC m=+638.159923956" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.011660 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-84xwb"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.013337 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.016414 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9z4rc" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.016500 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.016414 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.026937 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hdx96"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.028033 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hdx96" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.029907 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6h8qk" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.033431 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxq75\" (UniqueName: \"kubernetes.io/projected/0e545ad2-65e7-4dd5-81ec-6c5726d5df36-kube-api-access-rxq75\") pod \"cert-manager-cainjector-7f985d654d-84xwb\" (UID: \"0e545ad2-65e7-4dd5-81ec-6c5726d5df36\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.033487 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5f7t\" (UniqueName: \"kubernetes.io/projected/abd371b6-063d-4063-a4e1-0ca1e9253b4c-kube-api-access-k5f7t\") pod \"cert-manager-5b446d88c5-hdx96\" (UID: \"abd371b6-063d-4063-a4e1-0ca1e9253b4c\") " pod="cert-manager/cert-manager-5b446d88c5-hdx96" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.035457 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hdx96"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.039586 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-g2kzb"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.040599 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.043523 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4cd5z" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.057541 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-g2kzb"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.073974 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-84xwb"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.135531 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxq75\" (UniqueName: \"kubernetes.io/projected/0e545ad2-65e7-4dd5-81ec-6c5726d5df36-kube-api-access-rxq75\") pod \"cert-manager-cainjector-7f985d654d-84xwb\" (UID: \"0e545ad2-65e7-4dd5-81ec-6c5726d5df36\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.135639 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5f7t\" (UniqueName: \"kubernetes.io/projected/abd371b6-063d-4063-a4e1-0ca1e9253b4c-kube-api-access-k5f7t\") pod \"cert-manager-5b446d88c5-hdx96\" (UID: \"abd371b6-063d-4063-a4e1-0ca1e9253b4c\") " pod="cert-manager/cert-manager-5b446d88c5-hdx96" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.135707 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqv4z\" (UniqueName: \"kubernetes.io/projected/2d939e62-3c98-4d65-9da2-04d29b510399-kube-api-access-nqv4z\") pod \"cert-manager-webhook-5655c58dd6-g2kzb\" (UID: \"2d939e62-3c98-4d65-9da2-04d29b510399\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.157098 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k5f7t\" (UniqueName: \"kubernetes.io/projected/abd371b6-063d-4063-a4e1-0ca1e9253b4c-kube-api-access-k5f7t\") pod \"cert-manager-5b446d88c5-hdx96\" (UID: \"abd371b6-063d-4063-a4e1-0ca1e9253b4c\") " pod="cert-manager/cert-manager-5b446d88c5-hdx96" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.159476 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxq75\" (UniqueName: \"kubernetes.io/projected/0e545ad2-65e7-4dd5-81ec-6c5726d5df36-kube-api-access-rxq75\") pod \"cert-manager-cainjector-7f985d654d-84xwb\" (UID: \"0e545ad2-65e7-4dd5-81ec-6c5726d5df36\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.237817 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqv4z\" (UniqueName: \"kubernetes.io/projected/2d939e62-3c98-4d65-9da2-04d29b510399-kube-api-access-nqv4z\") pod \"cert-manager-webhook-5655c58dd6-g2kzb\" (UID: \"2d939e62-3c98-4d65-9da2-04d29b510399\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.259502 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqv4z\" (UniqueName: \"kubernetes.io/projected/2d939e62-3c98-4d65-9da2-04d29b510399-kube-api-access-nqv4z\") pod \"cert-manager-webhook-5655c58dd6-g2kzb\" (UID: \"2d939e62-3c98-4d65-9da2-04d29b510399\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.328614 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.344758 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hdx96" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.358769 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.679714 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hdx96"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.725326 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-g2kzb"] Nov 24 21:17:52 crc kubenswrapper[4801]: I1124 21:17:52.824284 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-84xwb"] Nov 24 21:17:53 crc kubenswrapper[4801]: I1124 21:17:53.012627 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" event={"ID":"2d939e62-3c98-4d65-9da2-04d29b510399","Type":"ContainerStarted","Data":"b3fce4cdfba7bfd7f39f0260385ee6b14741bb27b2c450d983096df7eb69cfa4"} Nov 24 21:17:53 crc kubenswrapper[4801]: I1124 21:17:53.013624 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hdx96" event={"ID":"abd371b6-063d-4063-a4e1-0ca1e9253b4c","Type":"ContainerStarted","Data":"ae3758dece43ed50442aebb49025a0a2ddb81af9b71f06592e58d176ba1f7a88"} Nov 24 21:17:53 crc kubenswrapper[4801]: I1124 21:17:53.014600 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" event={"ID":"0e545ad2-65e7-4dd5-81ec-6c5726d5df36","Type":"ContainerStarted","Data":"f22e1bc54215de47a7c1b2d412c2b6b24f3c0145b53b3a210c2596cdea6ceb4a"} Nov 24 21:17:56 crc kubenswrapper[4801]: I1124 21:17:56.060498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" 
event={"ID":"2d939e62-3c98-4d65-9da2-04d29b510399","Type":"ContainerStarted","Data":"729ffdfa2fb07da43a8465d2cd323da8fb4ba10c282e408b72913663c8aa442e"} Nov 24 21:17:56 crc kubenswrapper[4801]: I1124 21:17:56.061119 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:17:56 crc kubenswrapper[4801]: I1124 21:17:56.064795 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hdx96" event={"ID":"abd371b6-063d-4063-a4e1-0ca1e9253b4c","Type":"ContainerStarted","Data":"d47340424ef56af610523f3f29d9108cd979d64c6718e1016a961178f1ee30c0"} Nov 24 21:17:56 crc kubenswrapper[4801]: I1124 21:17:56.085357 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" podStartSLOduration=0.956032515 podStartE2EDuration="4.085332681s" podCreationTimestamp="2025-11-24 21:17:52 +0000 UTC" firstStartedPulling="2025-11-24 21:17:52.730951141 +0000 UTC m=+644.813537811" lastFinishedPulling="2025-11-24 21:17:55.860251307 +0000 UTC m=+647.942837977" observedRunningTime="2025-11-24 21:17:56.077432006 +0000 UTC m=+648.160018686" watchObservedRunningTime="2025-11-24 21:17:56.085332681 +0000 UTC m=+648.167919361" Nov 24 21:17:57 crc kubenswrapper[4801]: I1124 21:17:57.072174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" event={"ID":"0e545ad2-65e7-4dd5-81ec-6c5726d5df36","Type":"ContainerStarted","Data":"f31d2e4a521a19428617d0eaa8930c6b0d03514cf8cf48dba477ec77036378b2"} Nov 24 21:17:57 crc kubenswrapper[4801]: I1124 21:17:57.107269 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-84xwb" podStartSLOduration=2.181894446 podStartE2EDuration="6.107235142s" podCreationTimestamp="2025-11-24 21:17:51 +0000 UTC" firstStartedPulling="2025-11-24 21:17:52.818198045 +0000 
UTC m=+644.900784725" lastFinishedPulling="2025-11-24 21:17:56.743538751 +0000 UTC m=+648.826125421" observedRunningTime="2025-11-24 21:17:57.099174809 +0000 UTC m=+649.181761519" watchObservedRunningTime="2025-11-24 21:17:57.107235142 +0000 UTC m=+649.189821832" Nov 24 21:17:57 crc kubenswrapper[4801]: I1124 21:17:57.139490 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hdx96" podStartSLOduration=3.036101798 podStartE2EDuration="6.139397811s" podCreationTimestamp="2025-11-24 21:17:51 +0000 UTC" firstStartedPulling="2025-11-24 21:17:52.690586926 +0000 UTC m=+644.773173596" lastFinishedPulling="2025-11-24 21:17:55.793882939 +0000 UTC m=+647.876469609" observedRunningTime="2025-11-24 21:17:56.094887291 +0000 UTC m=+648.177473961" watchObservedRunningTime="2025-11-24 21:17:57.139397811 +0000 UTC m=+649.221984491" Nov 24 21:17:57 crc kubenswrapper[4801]: I1124 21:17:57.447142 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-lwcrt" Nov 24 21:18:02 crc kubenswrapper[4801]: I1124 21:18:02.363053 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-g2kzb" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.740876 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk"] Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.744256 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.748161 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.753784 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk"] Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.833239 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.833547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfsx\" (UniqueName: \"kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.833876 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: 
I1124 21:18:25.934792 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.934847 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.934895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfsx\" (UniqueName: \"kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.935322 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.935433 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:25 crc kubenswrapper[4801]: I1124 21:18:25.956566 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfsx\" (UniqueName: \"kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.062880 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.168129 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk"] Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.170172 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.177732 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk"] Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.343295 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.343394 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.343443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6x6\" (UniqueName: \"kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.344895 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk"] Nov 24 
21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.444566 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml6x6\" (UniqueName: \"kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.444649 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.444704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.445192 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.445492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.468321 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml6x6\" (UniqueName: \"kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.499712 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:26 crc kubenswrapper[4801]: I1124 21:18:26.968815 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk"] Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.333386 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerID="49908f1084c4b2ffdd26b4301975d73fbbd1f65299b6078109294970feea37c6" exitCode=0 Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.333434 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" event={"ID":"fd5002b4-4786-4560-82d1-57946fbf0b5c","Type":"ContainerDied","Data":"49908f1084c4b2ffdd26b4301975d73fbbd1f65299b6078109294970feea37c6"} Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.333851 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" event={"ID":"fd5002b4-4786-4560-82d1-57946fbf0b5c","Type":"ContainerStarted","Data":"822395db336549da7acedf2cc8fcfccc07d920857e7b7703887d8f8cc78d9058"} Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.335896 4801 generic.go:334] "Generic (PLEG): container finished" podID="80d8329c-034d-457d-9e61-823b13e4e87e" containerID="9de4f1b8b7117da023912a1ad3950b843bcecd951d842960456cdee1d05f7709" exitCode=0 Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.335927 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" event={"ID":"80d8329c-034d-457d-9e61-823b13e4e87e","Type":"ContainerDied","Data":"9de4f1b8b7117da023912a1ad3950b843bcecd951d842960456cdee1d05f7709"} Nov 24 21:18:27 crc kubenswrapper[4801]: I1124 21:18:27.335944 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" event={"ID":"80d8329c-034d-457d-9e61-823b13e4e87e","Type":"ContainerStarted","Data":"c67e516f5ba5f830dc35571807c658d458dc42eb60b2a84e050c0deceb64a1d6"} Nov 24 21:18:29 crc kubenswrapper[4801]: I1124 21:18:29.360067 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerID="d56024e6bd31e8f2f35ae3127b9487b551ea85a8b8f1c3b6ee299edb8da909e8" exitCode=0 Nov 24 21:18:29 crc kubenswrapper[4801]: I1124 21:18:29.360144 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" event={"ID":"fd5002b4-4786-4560-82d1-57946fbf0b5c","Type":"ContainerDied","Data":"d56024e6bd31e8f2f35ae3127b9487b551ea85a8b8f1c3b6ee299edb8da909e8"} Nov 24 21:18:29 crc kubenswrapper[4801]: I1124 21:18:29.366395 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="80d8329c-034d-457d-9e61-823b13e4e87e" containerID="15770d0aa6884d6a5036edd2150c9dd047698f96b6b80b73f438a8905a804561" exitCode=0 Nov 24 21:18:29 crc kubenswrapper[4801]: I1124 21:18:29.366448 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" event={"ID":"80d8329c-034d-457d-9e61-823b13e4e87e","Type":"ContainerDied","Data":"15770d0aa6884d6a5036edd2150c9dd047698f96b6b80b73f438a8905a804561"} Nov 24 21:18:30 crc kubenswrapper[4801]: I1124 21:18:30.374581 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerID="283d5153e36f61c08c16a7fbfabdc9b9f525ec892f4af6f08c4259a82119c669" exitCode=0 Nov 24 21:18:30 crc kubenswrapper[4801]: I1124 21:18:30.374646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" event={"ID":"fd5002b4-4786-4560-82d1-57946fbf0b5c","Type":"ContainerDied","Data":"283d5153e36f61c08c16a7fbfabdc9b9f525ec892f4af6f08c4259a82119c669"} Nov 24 21:18:30 crc kubenswrapper[4801]: I1124 21:18:30.377305 4801 generic.go:334] "Generic (PLEG): container finished" podID="80d8329c-034d-457d-9e61-823b13e4e87e" containerID="8a42ae3e5aa55628d8f5c44be4b3e9959c1577981828a1143f9be7564f4218f9" exitCode=0 Nov 24 21:18:30 crc kubenswrapper[4801]: I1124 21:18:30.377338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" event={"ID":"80d8329c-034d-457d-9e61-823b13e4e87e","Type":"ContainerDied","Data":"8a42ae3e5aa55628d8f5c44be4b3e9959c1577981828a1143f9be7564f4218f9"} Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.762637 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.769480 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.937709 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjfsx\" (UniqueName: \"kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx\") pod \"80d8329c-034d-457d-9e61-823b13e4e87e\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.938305 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle\") pod \"fd5002b4-4786-4560-82d1-57946fbf0b5c\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.938391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util\") pod \"80d8329c-034d-457d-9e61-823b13e4e87e\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.938443 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle\") pod \"80d8329c-034d-457d-9e61-823b13e4e87e\" (UID: \"80d8329c-034d-457d-9e61-823b13e4e87e\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.938590 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util\") pod 
\"fd5002b4-4786-4560-82d1-57946fbf0b5c\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.938690 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml6x6\" (UniqueName: \"kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6\") pod \"fd5002b4-4786-4560-82d1-57946fbf0b5c\" (UID: \"fd5002b4-4786-4560-82d1-57946fbf0b5c\") " Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.940183 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle" (OuterVolumeSpecName: "bundle") pod "80d8329c-034d-457d-9e61-823b13e4e87e" (UID: "80d8329c-034d-457d-9e61-823b13e4e87e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.940429 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle" (OuterVolumeSpecName: "bundle") pod "fd5002b4-4786-4560-82d1-57946fbf0b5c" (UID: "fd5002b4-4786-4560-82d1-57946fbf0b5c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.946092 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6" (OuterVolumeSpecName: "kube-api-access-ml6x6") pod "fd5002b4-4786-4560-82d1-57946fbf0b5c" (UID: "fd5002b4-4786-4560-82d1-57946fbf0b5c"). InnerVolumeSpecName "kube-api-access-ml6x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.951183 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx" (OuterVolumeSpecName: "kube-api-access-tjfsx") pod "80d8329c-034d-457d-9e61-823b13e4e87e" (UID: "80d8329c-034d-457d-9e61-823b13e4e87e"). InnerVolumeSpecName "kube-api-access-tjfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:18:31 crc kubenswrapper[4801]: I1124 21:18:31.955077 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util" (OuterVolumeSpecName: "util") pod "fd5002b4-4786-4560-82d1-57946fbf0b5c" (UID: "fd5002b4-4786-4560-82d1-57946fbf0b5c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.004239 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util" (OuterVolumeSpecName: "util") pod "80d8329c-034d-457d-9e61-823b13e4e87e" (UID: "80d8329c-034d-457d-9e61-823b13e4e87e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040592 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjfsx\" (UniqueName: \"kubernetes.io/projected/80d8329c-034d-457d-9e61-823b13e4e87e-kube-api-access-tjfsx\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040636 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040646 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040655 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80d8329c-034d-457d-9e61-823b13e4e87e-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040664 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd5002b4-4786-4560-82d1-57946fbf0b5c-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.040672 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml6x6\" (UniqueName: \"kubernetes.io/projected/fd5002b4-4786-4560-82d1-57946fbf0b5c-kube-api-access-ml6x6\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.394714 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" event={"ID":"fd5002b4-4786-4560-82d1-57946fbf0b5c","Type":"ContainerDied","Data":"822395db336549da7acedf2cc8fcfccc07d920857e7b7703887d8f8cc78d9058"} Nov 24 21:18:32 crc 
kubenswrapper[4801]: I1124 21:18:32.394750 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.394770 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822395db336549da7acedf2cc8fcfccc07d920857e7b7703887d8f8cc78d9058" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.397993 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" event={"ID":"80d8329c-034d-457d-9e61-823b13e4e87e","Type":"ContainerDied","Data":"c67e516f5ba5f830dc35571807c658d458dc42eb60b2a84e050c0deceb64a1d6"} Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.398047 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67e516f5ba5f830dc35571807c658d458dc42eb60b2a84e050c0deceb64a1d6" Nov 24 21:18:32 crc kubenswrapper[4801]: I1124 21:18:32.398152 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.969910 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm"] Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.970912 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="pull" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.970929 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="pull" Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.970954 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.970962 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.970973 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="util" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.970979 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="util" Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.970991 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.970997 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.971005 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="pull" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.971010 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="pull" Nov 24 21:18:42 crc kubenswrapper[4801]: E1124 21:18:42.971019 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="util" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.971025 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="util" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.971202 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5002b4-4786-4560-82d1-57946fbf0b5c" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.971219 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d8329c-034d-457d-9e61-823b13e4e87e" containerName="extract" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.972093 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.974427 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.975221 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.980395 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm"] Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.980803 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.980833 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.981070 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 24 21:18:42 crc kubenswrapper[4801]: I1124 21:18:42.981116 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-s77gq" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.111867 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkcw\" (UniqueName: \"kubernetes.io/projected/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-kube-api-access-cfkcw\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: 
I1124 21:18:43.112234 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.112260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-webhook-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.112309 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-apiservice-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.112335 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-manager-config\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.214220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-webhook-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.214308 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-apiservice-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.214345 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-manager-config\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.214428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfkcw\" (UniqueName: \"kubernetes.io/projected/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-kube-api-access-cfkcw\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.214469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: 
\"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.215541 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-manager-config\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.222024 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-apiservice-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.222063 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.238859 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfkcw\" (UniqueName: \"kubernetes.io/projected/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-kube-api-access-cfkcw\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.242227 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc08462b-4c52-4a4c-8e0d-30446d2b9a57-webhook-cert\") pod \"loki-operator-controller-manager-5c79fb6df8-dkhwm\" (UID: \"bc08462b-4c52-4a4c-8e0d-30446d2b9a57\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.290290 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:43 crc kubenswrapper[4801]: I1124 21:18:43.642021 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm"] Nov 24 21:18:43 crc kubenswrapper[4801]: W1124 21:18:43.653726 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc08462b_4c52_4a4c_8e0d_30446d2b9a57.slice/crio-1e64350823059e220a173ea954f84be4e66c3bcafd7b95257e527241101fc8d3 WatchSource:0}: Error finding container 1e64350823059e220a173ea954f84be4e66c3bcafd7b95257e527241101fc8d3: Status 404 returned error can't find the container with id 1e64350823059e220a173ea954f84be4e66c3bcafd7b95257e527241101fc8d3 Nov 24 21:18:44 crc kubenswrapper[4801]: I1124 21:18:44.489588 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" event={"ID":"bc08462b-4c52-4a4c-8e0d-30446d2b9a57","Type":"ContainerStarted","Data":"1e64350823059e220a173ea954f84be4e66c3bcafd7b95257e527241101fc8d3"} Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.976780 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kvjtv"] Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.978781 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.981785 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.981890 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.986099 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-44z6t" Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.992252 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhnm\" (UniqueName: \"kubernetes.io/projected/cc709992-f0d1-4d54-abcc-06c28f330196-kube-api-access-hlhnm\") pod \"cluster-logging-operator-ff9846bd-kvjtv\" (UID: \"cc709992-f0d1-4d54-abcc-06c28f330196\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" Nov 24 21:18:46 crc kubenswrapper[4801]: I1124 21:18:46.996720 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kvjtv"] Nov 24 21:18:47 crc kubenswrapper[4801]: I1124 21:18:47.093375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhnm\" (UniqueName: \"kubernetes.io/projected/cc709992-f0d1-4d54-abcc-06c28f330196-kube-api-access-hlhnm\") pod \"cluster-logging-operator-ff9846bd-kvjtv\" (UID: \"cc709992-f0d1-4d54-abcc-06c28f330196\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" Nov 24 21:18:47 crc kubenswrapper[4801]: I1124 21:18:47.118408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhnm\" (UniqueName: \"kubernetes.io/projected/cc709992-f0d1-4d54-abcc-06c28f330196-kube-api-access-hlhnm\") pod 
\"cluster-logging-operator-ff9846bd-kvjtv\" (UID: \"cc709992-f0d1-4d54-abcc-06c28f330196\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" Nov 24 21:18:47 crc kubenswrapper[4801]: I1124 21:18:47.309538 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" Nov 24 21:18:48 crc kubenswrapper[4801]: I1124 21:18:48.968116 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kvjtv"] Nov 24 21:18:49 crc kubenswrapper[4801]: I1124 21:18:49.544050 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" event={"ID":"cc709992-f0d1-4d54-abcc-06c28f330196","Type":"ContainerStarted","Data":"7208aa8c5c9319b764cdd234f4819540d649f03ef3ac0a6cb8926f84495da513"} Nov 24 21:18:49 crc kubenswrapper[4801]: I1124 21:18:49.545883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" event={"ID":"bc08462b-4c52-4a4c-8e0d-30446d2b9a57","Type":"ContainerStarted","Data":"3c8ce10a6f2984bec6a4e862c071ff0c9df6752527c73ef15b18e510164cae92"} Nov 24 21:18:54 crc kubenswrapper[4801]: I1124 21:18:54.320726 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:18:54 crc kubenswrapper[4801]: I1124 21:18:54.321702 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:18:58 crc 
kubenswrapper[4801]: I1124 21:18:58.632255 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" event={"ID":"bc08462b-4c52-4a4c-8e0d-30446d2b9a57","Type":"ContainerStarted","Data":"0d5ef3e87061bdc1cd0fc9e50fda8ce60da166cc91dd63513c54633fc2d358d6"} Nov 24 21:18:58 crc kubenswrapper[4801]: I1124 21:18:58.633055 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:58 crc kubenswrapper[4801]: I1124 21:18:58.634483 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" Nov 24 21:18:58 crc kubenswrapper[4801]: I1124 21:18:58.637658 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" event={"ID":"cc709992-f0d1-4d54-abcc-06c28f330196","Type":"ContainerStarted","Data":"e8f94aea8423916ba57dd14f6296625fca187bebc206ab2ab2763867ad8edf23"} Nov 24 21:18:58 crc kubenswrapper[4801]: I1124 21:18:58.695147 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5c79fb6df8-dkhwm" podStartSLOduration=2.138801052 podStartE2EDuration="16.695125354s" podCreationTimestamp="2025-11-24 21:18:42 +0000 UTC" firstStartedPulling="2025-11-24 21:18:43.656912089 +0000 UTC m=+695.739498759" lastFinishedPulling="2025-11-24 21:18:58.213236381 +0000 UTC m=+710.295823061" observedRunningTime="2025-11-24 21:18:58.665396788 +0000 UTC m=+710.747983458" watchObservedRunningTime="2025-11-24 21:18:58.695125354 +0000 UTC m=+710.777712014" Nov 24 21:18:58 crc kubenswrapper[4801]: I1124 21:18:58.718125 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-kvjtv" podStartSLOduration=3.506859113 
podStartE2EDuration="12.718101547s" podCreationTimestamp="2025-11-24 21:18:46 +0000 UTC" firstStartedPulling="2025-11-24 21:18:48.983257512 +0000 UTC m=+701.065844172" lastFinishedPulling="2025-11-24 21:18:58.194499936 +0000 UTC m=+710.277086606" observedRunningTime="2025-11-24 21:18:58.713697393 +0000 UTC m=+710.796284133" watchObservedRunningTime="2025-11-24 21:18:58.718101547 +0000 UTC m=+710.800688217" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.078860 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.081690 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.085493 4801 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-525k5" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.085519 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.086834 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.096536 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.110869 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzqw\" (UniqueName: \"kubernetes.io/projected/b2c6eb7a-f984-4618-9f37-8b618794cea6-kube-api-access-qpzqw\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.110915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.212446 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpzqw\" (UniqueName: \"kubernetes.io/projected/b2c6eb7a-f984-4618-9f37-8b618794cea6-kube-api-access-qpzqw\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.212506 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.217316 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.217391 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fcd25713369b8ab6fe4ebf8c6c244bc966bd2bd47f0e242eb7d44cefc6eaaefe/globalmount\"" pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.237430 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpzqw\" (UniqueName: \"kubernetes.io/projected/b2c6eb7a-f984-4618-9f37-8b618794cea6-kube-api-access-qpzqw\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.249087 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e7cdff-1bd7-411e-a80f-0849bf4e4ff8\") pod \"minio\" (UID: \"b2c6eb7a-f984-4618-9f37-8b618794cea6\") " pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.406453 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 24 21:19:04 crc kubenswrapper[4801]: I1124 21:19:04.727689 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 21:19:05 crc kubenswrapper[4801]: I1124 21:19:05.710030 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b2c6eb7a-f984-4618-9f37-8b618794cea6","Type":"ContainerStarted","Data":"46200eae2a8df29f125188b76c2a522803a5de1080c51fd4ca4fe340c3a2597e"} Nov 24 21:19:08 crc kubenswrapper[4801]: I1124 21:19:08.729816 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b2c6eb7a-f984-4618-9f37-8b618794cea6","Type":"ContainerStarted","Data":"dc81ef6f1419abd8159d6b64d89f28185e0b539bc0a05617f1284b05a3131ec3"} Nov 24 21:19:08 crc kubenswrapper[4801]: I1124 21:19:08.751444 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.528323781 podStartE2EDuration="7.751425275s" podCreationTimestamp="2025-11-24 21:19:01 +0000 UTC" firstStartedPulling="2025-11-24 21:19:04.746693723 +0000 UTC m=+716.829280393" lastFinishedPulling="2025-11-24 21:19:07.969795177 +0000 UTC m=+720.052381887" observedRunningTime="2025-11-24 21:19:08.746636707 +0000 UTC m=+720.829223387" watchObservedRunningTime="2025-11-24 21:19:08.751425275 +0000 UTC m=+720.834011945" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.912830 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"] Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.916828 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.918853 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.921177 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-mwh2p" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.921489 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.921603 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.923048 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 24 21:19:12 crc kubenswrapper[4801]: I1124 21:19:12.925180 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"] Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.082410 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-sldkd"] Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.082955 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.082994 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wd749\" (UniqueName: \"kubernetes.io/projected/08034d9d-5888-426f-9a8c-137de45fef21-kube-api-access-wd749\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.083049 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-config\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.083087 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.083116 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.083247 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.086440 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.086486 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.086590 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.101954 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-sldkd"] Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.173915 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"] Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.175119 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.176934 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.179236 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.184056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-config\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.184144 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.184191 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.184242 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: 
\"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.184278 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd749\" (UniqueName: \"kubernetes.io/projected/08034d9d-5888-426f-9a8c-137de45fef21-kube-api-access-wd749\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.185815 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-config\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.192160 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"] Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.193041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.194127 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-grpc\") pod 
\"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.203420 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/08034d9d-5888-426f-9a8c-137de45fef21-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.208457 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd749\" (UniqueName: \"kubernetes.io/projected/08034d9d-5888-426f-9a8c-137de45fef21-kube-api-access-wd749\") pod \"logging-loki-distributor-76cc67bf56-xp22s\" (UID: \"08034d9d-5888-426f-9a8c-137de45fef21\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.274607 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-tk7zw"]
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.277677 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.282716 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.282876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.283100 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.283277 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.285490 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.285982 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286573 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-config\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286647 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286719 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqs2j\" (UniqueName: \"kubernetes.io/projected/0196a92d-1cc8-4f72-922e-692ca28b2d88-kube-api-access-gqs2j\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c99s\" (UniqueName: \"kubernetes.io/projected/f3696f2a-d943-47e9-b634-49e7293e64db-kube-api-access-4c99s\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286817 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286936 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286961 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.286989 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-config\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.287023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.310000 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-tk7zw"]
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.323952 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-dvfrt"]
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.325952 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.338358 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-dvfrt"]
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.338747 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-h2xt8"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqs2j\" (UniqueName: \"kubernetes.io/projected/0196a92d-1cc8-4f72-922e-692ca28b2d88-kube-api-access-gqs2j\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388358 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c99s\" (UniqueName: \"kubernetes.io/projected/f3696f2a-d943-47e9-b634-49e7293e64db-kube-api-access-4c99s\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388440 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tls-secret\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388502 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388536 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-rbac\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388595 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388623 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388648 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-config\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388720 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xdq\" (UniqueName: \"kubernetes.io/projected/3ac04356-36c8-4670-b849-cacb649d9a9a-kube-api-access-b4xdq\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388809 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388846 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-config\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388888 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388925 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.388964 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.389002 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tenants\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.389881 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.390048 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-config\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.390446 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3696f2a-d943-47e9-b634-49e7293e64db-config\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.392280 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.394984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.395219 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.395381 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0196a92d-1cc8-4f72-922e-692ca28b2d88-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.395801 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.408860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f3696f2a-d943-47e9-b634-49e7293e64db-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.419560 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c99s\" (UniqueName: \"kubernetes.io/projected/f3696f2a-d943-47e9-b634-49e7293e64db-kube-api-access-4c99s\") pod \"logging-loki-query-frontend-84558f7c9f-gw76f\" (UID: \"f3696f2a-d943-47e9-b634-49e7293e64db\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.422661 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqs2j\" (UniqueName: \"kubernetes.io/projected/0196a92d-1cc8-4f72-922e-692ca28b2d88-kube-api-access-gqs2j\") pod \"logging-loki-querier-5895d59bb8-sldkd\" (UID: \"0196a92d-1cc8-4f72-922e-692ca28b2d88\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491477 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-rbac\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491611 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491681 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tenants\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491704 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9fbt\" (UniqueName: \"kubernetes.io/projected/145468bd-4c70-49ef-a013-6cc672232c5e-kube-api-access-d9fbt\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491786 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tls-secret\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-rbac\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491929 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tenants\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491950 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tls-secret\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.491994 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.492017 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xdq\" (UniqueName: \"kubernetes.io/projected/3ac04356-36c8-4670-b849-cacb649d9a9a-kube-api-access-b4xdq\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.492062 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.493177 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.494035 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.496260 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-rbac\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.496329 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.504834 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.506212 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tls-secret\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.508488 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3ac04356-36c8-4670-b849-cacb649d9a9a-tenants\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.515888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xdq\" (UniqueName: \"kubernetes.io/projected/3ac04356-36c8-4670-b849-cacb649d9a9a-kube-api-access-b4xdq\") pod \"logging-loki-gateway-57b544c797-tk7zw\" (UID: \"3ac04356-36c8-4670-b849-cacb649d9a9a\") " pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.564956 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602699 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-rbac\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602822 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602865 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602897 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9fbt\" (UniqueName: \"kubernetes.io/projected/145468bd-4c70-49ef-a013-6cc672232c5e-kube-api-access-d9fbt\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.602985 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tenants\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.603010 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tls-secret\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.603853 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.604594 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-lokistack-gateway\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.605258 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-rbac\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.605896 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.616087 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tenants\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.623855 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.635984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9fbt\" (UniqueName: \"kubernetes.io/projected/145468bd-4c70-49ef-a013-6cc672232c5e-kube-api-access-d9fbt\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.644601 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.645147 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145468bd-4c70-49ef-a013-6cc672232c5e-tls-secret\") pod \"logging-loki-gateway-57b544c797-dvfrt\" (UID: \"145468bd-4c70-49ef-a013-6cc672232c5e\") " pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.691787 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.709459 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd"
Nov 24 21:19:13 crc kubenswrapper[4801]: I1124 21:19:13.858084 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-xp22s"]
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.064798 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.065702 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.071153 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.071248 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.076674 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.145735 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.147330 4801 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.149961 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.150139 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.156062 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218070 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218153 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218189 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218231 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218248 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbz67\" (UniqueName: \"kubernetes.io/projected/36d32422-038a-496e-9b7f-8616b90efb2b-kube-api-access-bbz67\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218274 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218305 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e8110b83-7721-451f-bd04-f9604fe55620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8110b83-7721-451f-bd04-f9604fe55620\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.218328 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-config\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.230802 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.231715 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.234871 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.235017 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.243020 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320357 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320436 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbz67\" 
(UniqueName: \"kubernetes.io/projected/36d32422-038a-496e-9b7f-8616b90efb2b-kube-api-access-bbz67\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320525 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320548 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-config\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320588 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zrqv9\" (UniqueName: \"kubernetes.io/projected/597f607f-f6da-4ba2-84e4-fd9cab0b313d-kube-api-access-zrqv9\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320613 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-config\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320638 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320712 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320909 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af067a04-de12-4838-a984-ce41c9cc0245\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af067a04-de12-4838-a984-ce41c9cc0245\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.320958 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321018 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321045 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321079 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " 
pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321124 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e8110b83-7721-451f-bd04-f9604fe55620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8110b83-7721-451f-bd04-f9604fe55620\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321183 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vfg2\" (UniqueName: \"kubernetes.io/projected/f4e8c367-ba45-416c-ae57-0a143fa11854-kube-api-access-5vfg2\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321272 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321306 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.321710 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d32422-038a-496e-9b7f-8616b90efb2b-config\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.324169 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.324213 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e8110b83-7721-451f-bd04-f9604fe55620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8110b83-7721-451f-bd04-f9604fe55620\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eeb5fcefb6f56d228cfbb8f3ed28cdc2486ae7b70acfdf239fb0b3bb36a545bd/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.324697 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.324730 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8dedc85d14c6fa31ccba38d83b14a71816073f8169caddd3acb7686ccd2b4833/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.325241 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.331399 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.337112 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-dvfrt"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.338582 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.341803 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbz67\" (UniqueName: 
\"kubernetes.io/projected/36d32422-038a-496e-9b7f-8616b90efb2b-kube-api-access-bbz67\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: W1124 21:19:14.349840 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145468bd_4c70_49ef_a013_6cc672232c5e.slice/crio-850edbe3b678be39211bf01778dfb31d3f9bd21df70553e054817f4f0a85efde WatchSource:0}: Error finding container 850edbe3b678be39211bf01778dfb31d3f9bd21df70553e054817f4f0a85efde: Status 404 returned error can't find the container with id 850edbe3b678be39211bf01778dfb31d3f9bd21df70553e054817f4f0a85efde Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.350550 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/36d32422-038a-496e-9b7f-8616b90efb2b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.367674 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e8110b83-7721-451f-bd04-f9604fe55620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8110b83-7721-451f-bd04-f9604fe55620\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.391250 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1641060-7c6d-44e4-a339-bb88d97d708d\") pod \"logging-loki-ingester-0\" (UID: \"36d32422-038a-496e-9b7f-8616b90efb2b\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.415152 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-57b544c797-tk7zw"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.422083 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-sldkd"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.422971 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af067a04-de12-4838-a984-ce41c9cc0245\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af067a04-de12-4838-a984-ce41c9cc0245\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423109 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423154 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: 
\"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423182 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423236 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vfg2\" (UniqueName: \"kubernetes.io/projected/f4e8c367-ba45-416c-ae57-0a143fa11854-kube-api-access-5vfg2\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423314 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423348 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423446 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423513 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrqv9\" (UniqueName: \"kubernetes.io/projected/597f607f-f6da-4ba2-84e4-fd9cab0b313d-kube-api-access-zrqv9\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423604 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-config\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.423667 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.425135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.426116 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.426966 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597f607f-f6da-4ba2-84e4-fd9cab0b313d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.428109 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e8c367-ba45-416c-ae57-0a143fa11854-config\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.431396 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.432725 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.432750 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af067a04-de12-4838-a984-ce41c9cc0245\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af067a04-de12-4838-a984-ce41c9cc0245\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c60cec0dcd3b03246a4b18cabb87751d4caafa0bb6731c6a8e5c857e0c09cf10/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.436959 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.437013 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/311a8837af3477a5b96b4d3758b3f1f125e39ef85d3c51ffd436b0cbbbc108d2/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.438469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.439034 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.441621 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.441805 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/597f607f-f6da-4ba2-84e4-fd9cab0b313d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.441862 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e8c367-ba45-416c-ae57-0a143fa11854-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.444247 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vfg2\" (UniqueName: \"kubernetes.io/projected/f4e8c367-ba45-416c-ae57-0a143fa11854-kube-api-access-5vfg2\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.455631 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrqv9\" (UniqueName: \"kubernetes.io/projected/597f607f-f6da-4ba2-84e4-fd9cab0b313d-kube-api-access-zrqv9\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.462350 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cec2a72-032f-43a1-b5ac-3df1188ba75f\") pod \"logging-loki-compactor-0\" (UID: \"f4e8c367-ba45-416c-ae57-0a143fa11854\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.464287 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-af067a04-de12-4838-a984-ce41c9cc0245\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af067a04-de12-4838-a984-ce41c9cc0245\") pod \"logging-loki-index-gateway-0\" (UID: \"597f607f-f6da-4ba2-84e4-fd9cab0b313d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.594175 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.686959 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.766213 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.807613 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" event={"ID":"3ac04356-36c8-4670-b849-cacb649d9a9a","Type":"ContainerStarted","Data":"9e05551b634fe2823f9db69007b139dc3920303be599b622ec475039a3da8180"} Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.825026 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" event={"ID":"145468bd-4c70-49ef-a013-6cc672232c5e","Type":"ContainerStarted","Data":"850edbe3b678be39211bf01778dfb31d3f9bd21df70553e054817f4f0a85efde"} Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.826419 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" event={"ID":"08034d9d-5888-426f-9a8c-137de45fef21","Type":"ContainerStarted","Data":"b52a131376fc7e518bf224e3aabeb182cc76334b866b782d0886fee9d32445ea"} Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.828314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.829119 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" event={"ID":"f3696f2a-d943-47e9-b634-49e7293e64db","Type":"ContainerStarted","Data":"3c1b1882e7a7af23cc6c27e41658ed5ad10eb8719b037bd98c102497b813fa54"} Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.830926 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" event={"ID":"0196a92d-1cc8-4f72-922e-692ca28b2d88","Type":"ContainerStarted","Data":"8c194eb1a72a620bb778441484d345a7b05650d31c3fc949ef2858cfbad3606a"} Nov 24 21:19:14 crc kubenswrapper[4801]: I1124 21:19:14.926825 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 21:19:15 crc kubenswrapper[4801]: I1124 21:19:15.009567 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 21:19:15 crc kubenswrapper[4801]: I1124 21:19:15.840314 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"36d32422-038a-496e-9b7f-8616b90efb2b","Type":"ContainerStarted","Data":"38e8ca19e0a6c917c747473b76a73ad22c9e6201cd0474aa5820adc44cd6414f"} Nov 24 21:19:15 crc kubenswrapper[4801]: I1124 21:19:15.843071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"597f607f-f6da-4ba2-84e4-fd9cab0b313d","Type":"ContainerStarted","Data":"b354bd650c144ecc0d44a40eff60322a24b86060d5d8d9f252831dd5bc0e3fa3"} Nov 24 21:19:15 crc kubenswrapper[4801]: I1124 21:19:15.844330 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"f4e8c367-ba45-416c-ae57-0a143fa11854","Type":"ContainerStarted","Data":"837b249bdff4282c1cec9ab4c66938ada5abbcb6d25e698d297fa0c497784cc7"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.873489 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" event={"ID":"08034d9d-5888-426f-9a8c-137de45fef21","Type":"ContainerStarted","Data":"d43b39bf97f9f5895f975e593eacd2b7935ec8888d2ed1fab7998ebf797da56d"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.875614 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.876752 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"f4e8c367-ba45-416c-ae57-0a143fa11854","Type":"ContainerStarted","Data":"52dca11054d74b172cbb403451f7dd1a1b644e3b9f09557d4bbe63d6f36f7e4e"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.877762 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.880051 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"36d32422-038a-496e-9b7f-8616b90efb2b","Type":"ContainerStarted","Data":"2b5745fa1452a93b42f8adb0ea0ccb137f60e42e0abb853c78dbc7a9acaa8cde"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.880516 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.881884 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" 
event={"ID":"f3696f2a-d943-47e9-b634-49e7293e64db","Type":"ContainerStarted","Data":"edf5b1d7f46b17d94a6a5ba0bc1950f1940f380a7721ba4a100581ee7373bb2c"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.882297 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.883240 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"597f607f-f6da-4ba2-84e4-fd9cab0b313d","Type":"ContainerStarted","Data":"79e0206211e373f64eb9b582447c284103de38b8338f77a8b254e6993872ccdf"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.883660 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.885804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" event={"ID":"0196a92d-1cc8-4f72-922e-692ca28b2d88","Type":"ContainerStarted","Data":"e28cfeb8b07d526e60d908d2bd8710cc04235f2be7cfb6c3bcc24343f3759d85"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.886177 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.887349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" event={"ID":"3ac04356-36c8-4670-b849-cacb649d9a9a","Type":"ContainerStarted","Data":"70b38b43ce7cc50b54815e9c4a62503f539ae8e1197e0466e026cfc0b3783a7c"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.888646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" 
event={"ID":"145468bd-4c70-49ef-a013-6cc672232c5e","Type":"ContainerStarted","Data":"467407874abe8ebbbdb99da0c0dead5963592f003252641b5b5659894181b110"} Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.898660 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" podStartSLOduration=2.660251292 podStartE2EDuration="6.898626292s" podCreationTimestamp="2025-11-24 21:19:12 +0000 UTC" firstStartedPulling="2025-11-24 21:19:13.882942368 +0000 UTC m=+725.965529038" lastFinishedPulling="2025-11-24 21:19:18.121317368 +0000 UTC m=+730.203904038" observedRunningTime="2025-11-24 21:19:18.89533609 +0000 UTC m=+730.977922750" watchObservedRunningTime="2025-11-24 21:19:18.898626292 +0000 UTC m=+730.981212962" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.924118 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" podStartSLOduration=2.148648634 podStartE2EDuration="5.924086151s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.34422414 +0000 UTC m=+726.426810810" lastFinishedPulling="2025-11-24 21:19:18.119661657 +0000 UTC m=+730.202248327" observedRunningTime="2025-11-24 21:19:18.916095443 +0000 UTC m=+730.998682133" watchObservedRunningTime="2025-11-24 21:19:18.924086151 +0000 UTC m=+731.006672861" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.951966 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.735635501 podStartE2EDuration="5.951936164s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.860670662 +0000 UTC m=+726.943257332" lastFinishedPulling="2025-11-24 21:19:18.076971325 +0000 UTC m=+730.159557995" observedRunningTime="2025-11-24 21:19:18.946304029 +0000 UTC m=+731.028890709" 
watchObservedRunningTime="2025-11-24 21:19:18.951936164 +0000 UTC m=+731.034522844" Nov 24 21:19:18 crc kubenswrapper[4801]: I1124 21:19:18.971699 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.9011057879999997 podStartE2EDuration="5.971663985s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:15.02070683 +0000 UTC m=+727.103293500" lastFinishedPulling="2025-11-24 21:19:18.091265027 +0000 UTC m=+730.173851697" observedRunningTime="2025-11-24 21:19:18.965481193 +0000 UTC m=+731.048067863" watchObservedRunningTime="2025-11-24 21:19:18.971663985 +0000 UTC m=+731.054250695" Nov 24 21:19:19 crc kubenswrapper[4801]: I1124 21:19:19.001203 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.8714732 podStartE2EDuration="6.0011856s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.934736016 +0000 UTC m=+727.017322686" lastFinishedPulling="2025-11-24 21:19:18.064448386 +0000 UTC m=+730.147035086" observedRunningTime="2025-11-24 21:19:19.000882631 +0000 UTC m=+731.083469311" watchObservedRunningTime="2025-11-24 21:19:19.0011856 +0000 UTC m=+731.083772270" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.908463 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" event={"ID":"145468bd-4c70-49ef-a013-6cc672232c5e","Type":"ContainerStarted","Data":"ddb049d4baf23660515e5379e5641f7618ce07a7ebf9a3c4cb5cba653c644f05"} Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.909822 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.909888 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.911540 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" event={"ID":"3ac04356-36c8-4670-b849-cacb649d9a9a","Type":"ContainerStarted","Data":"288c507ed29c2f78f229931b7875a0156343ce3aa736ed7b559b1a3f55cbf5de"} Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.913032 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.913098 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.921447 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.921963 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.931932 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" podStartSLOduration=4.228893638 podStartE2EDuration="7.931902651s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.418873973 +0000 UTC m=+726.501460643" lastFinishedPulling="2025-11-24 21:19:18.121882986 +0000 UTC m=+730.204469656" observedRunningTime="2025-11-24 21:19:19.036217315 +0000 UTC m=+731.118803985" watchObservedRunningTime="2025-11-24 21:19:20.931902651 +0000 UTC m=+733.014489341" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.934661 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.937324 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" podStartSLOduration=1.6552628569999999 podStartE2EDuration="7.937292667s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.352421884 +0000 UTC m=+726.435008554" lastFinishedPulling="2025-11-24 21:19:20.634451694 +0000 UTC m=+732.717038364" observedRunningTime="2025-11-24 21:19:20.93219141 +0000 UTC m=+733.014778130" watchObservedRunningTime="2025-11-24 21:19:20.937292667 +0000 UTC m=+733.019879337" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.940903 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-57b544c797-dvfrt" Nov 24 21:19:20 crc kubenswrapper[4801]: I1124 21:19:20.960401 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-57b544c797-tk7zw" podStartSLOduration=1.748391103 podStartE2EDuration="7.960368393s" podCreationTimestamp="2025-11-24 21:19:13 +0000 UTC" firstStartedPulling="2025-11-24 21:19:14.420482443 +0000 UTC m=+726.503069113" lastFinishedPulling="2025-11-24 21:19:20.632459733 +0000 UTC m=+732.715046403" observedRunningTime="2025-11-24 21:19:20.95800514 +0000 UTC m=+733.040591850" watchObservedRunningTime="2025-11-24 21:19:20.960368393 +0000 UTC m=+733.042955063" Nov 24 21:19:24 crc kubenswrapper[4801]: I1124 21:19:24.320583 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:19:24 crc kubenswrapper[4801]: I1124 21:19:24.321068 4801 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:19:33 crc kubenswrapper[4801]: I1124 21:19:33.297872 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-xp22s" Nov 24 21:19:33 crc kubenswrapper[4801]: I1124 21:19:33.574264 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-gw76f" Nov 24 21:19:33 crc kubenswrapper[4801]: I1124 21:19:33.718419 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-sldkd" Nov 24 21:19:34 crc kubenswrapper[4801]: I1124 21:19:34.606750 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:19:34 crc kubenswrapper[4801]: I1124 21:19:34.698346 4801 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 24 21:19:34 crc kubenswrapper[4801]: I1124 21:19:34.698469 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="36d32422-038a-496e-9b7f-8616b90efb2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:19:34 crc kubenswrapper[4801]: I1124 21:19:34.775016 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:19:36 crc kubenswrapper[4801]: I1124 21:19:36.435429 4801 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 21:19:36 crc kubenswrapper[4801]: I1124 21:19:36.436138 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerName="controller-manager" containerID="cri-o://4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d" gracePeriod=30 Nov 24 21:19:36 crc kubenswrapper[4801]: I1124 21:19:36.532889 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:19:36 crc kubenswrapper[4801]: I1124 21:19:36.533124 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" podUID="63c79716-9856-4a80-bc3e-8d016e1bfc97" containerName="route-controller-manager" containerID="cri-o://a5b5775549f8356f4ccd1091965809faa45022b5d9f6611f12d0e777c007f709" gracePeriod=30 Nov 24 21:19:36 crc kubenswrapper[4801]: I1124 21:19:36.984741 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.035717 4801 generic.go:334] "Generic (PLEG): container finished" podID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerID="4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d" exitCode=0 Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.035801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" event={"ID":"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4","Type":"ContainerDied","Data":"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d"} Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.035887 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" event={"ID":"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4","Type":"ContainerDied","Data":"8c0ff659b1faa7b3cbf616f42f0b02a8129f502e20bd250c9abe250bb6af7704"} Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.035916 4801 scope.go:117] "RemoveContainer" containerID="4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.036071 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-flzk9" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.039436 4801 generic.go:334] "Generic (PLEG): container finished" podID="63c79716-9856-4a80-bc3e-8d016e1bfc97" containerID="a5b5775549f8356f4ccd1091965809faa45022b5d9f6611f12d0e777c007f709" exitCode=0 Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.039496 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" event={"ID":"63c79716-9856-4a80-bc3e-8d016e1bfc97","Type":"ContainerDied","Data":"a5b5775549f8356f4ccd1091965809faa45022b5d9f6611f12d0e777c007f709"} Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.056948 4801 scope.go:117] "RemoveContainer" containerID="4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d" Nov 24 21:19:37 crc kubenswrapper[4801]: E1124 21:19:37.057542 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d\": container with ID starting with 4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d not found: ID does not exist" containerID="4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.057577 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d"} err="failed to get container status \"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d\": rpc error: code = NotFound desc = could not find container \"4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d\": container with ID starting with 4c860047e5f4c04f4ac7c7832906eecbb7a291bcc7c893c44b28193eb8ef082d not found: ID does not exist" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 
21:19:37.094459 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert\") pod \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.094543 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config\") pod \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.094654 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9bhg\" (UniqueName: \"kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg\") pod \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.094680 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca\") pod \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.094777 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles\") pod \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\" (UID: \"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.096119 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" (UID: "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.096154 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" (UID: "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.096583 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.096843 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config" (OuterVolumeSpecName: "config") pod "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" (UID: "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.110674 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" (UID: "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.120916 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg" (OuterVolumeSpecName: "kube-api-access-n9bhg") pod "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" (UID: "a9c6c44d-dd82-4fb9-99e4-ab1f584909f4"). InnerVolumeSpecName "kube-api-access-n9bhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.196313 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca\") pod \"63c79716-9856-4a80-bc3e-8d016e1bfc97\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.196360 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config\") pod \"63c79716-9856-4a80-bc3e-8d016e1bfc97\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.196558 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmrgc\" (UniqueName: \"kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc\") pod \"63c79716-9856-4a80-bc3e-8d016e1bfc97\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.196639 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert\") pod \"63c79716-9856-4a80-bc3e-8d016e1bfc97\" (UID: \"63c79716-9856-4a80-bc3e-8d016e1bfc97\") " Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197001 4801 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9bhg\" (UniqueName: \"kubernetes.io/projected/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-kube-api-access-n9bhg\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197023 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197035 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197046 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197058 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.197594 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca" (OuterVolumeSpecName: "client-ca") pod "63c79716-9856-4a80-bc3e-8d016e1bfc97" (UID: "63c79716-9856-4a80-bc3e-8d016e1bfc97"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.198005 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config" (OuterVolumeSpecName: "config") pod "63c79716-9856-4a80-bc3e-8d016e1bfc97" (UID: "63c79716-9856-4a80-bc3e-8d016e1bfc97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.200861 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63c79716-9856-4a80-bc3e-8d016e1bfc97" (UID: "63c79716-9856-4a80-bc3e-8d016e1bfc97"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.201825 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc" (OuterVolumeSpecName: "kube-api-access-lmrgc") pod "63c79716-9856-4a80-bc3e-8d016e1bfc97" (UID: "63c79716-9856-4a80-bc3e-8d016e1bfc97"). InnerVolumeSpecName "kube-api-access-lmrgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.299221 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.299275 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63c79716-9856-4a80-bc3e-8d016e1bfc97-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.299294 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmrgc\" (UniqueName: \"kubernetes.io/projected/63c79716-9856-4a80-bc3e-8d016e1bfc97-kube-api-access-lmrgc\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.299309 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c79716-9856-4a80-bc3e-8d016e1bfc97-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.378789 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 21:19:37 crc kubenswrapper[4801]: I1124 21:19:37.384495 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-flzk9"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.053703 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" event={"ID":"63c79716-9856-4a80-bc3e-8d016e1bfc97","Type":"ContainerDied","Data":"af34ce972fe038d580380d5e33eba331f5f3181f4af6c6d5710c8d3b3207e204"} Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.054282 4801 scope.go:117] "RemoveContainer" 
containerID="a5b5775549f8356f4ccd1091965809faa45022b5d9f6611f12d0e777c007f709" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.053951 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.117199 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2"] Nov 24 21:19:38 crc kubenswrapper[4801]: E1124 21:19:38.117629 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerName="controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.117652 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerName="controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: E1124 21:19:38.117684 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c79716-9856-4a80-bc3e-8d016e1bfc97" containerName="route-controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.117691 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c79716-9856-4a80-bc3e-8d016e1bfc97" containerName="route-controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.117846 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" containerName="controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.117864 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c79716-9856-4a80-bc3e-8d016e1bfc97" containerName="route-controller-manager" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.121607 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.126448 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.126732 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.126839 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.128025 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.128356 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.129034 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.133588 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df556b585-xqslk"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.154322 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.154492 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.157652 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbkxm"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.158330 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.158657 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.159128 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.161697 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.161808 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.162493 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.162935 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.167072 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df556b585-xqslk"] Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.173961 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 
21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.217982 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.218206 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.218304 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwxd\" (UniqueName: \"kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.218530 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.247073 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2"] Nov 24 21:19:38 crc kubenswrapper[4801]: E1124 21:19:38.247742 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-rqwxd serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" podUID="4de3e0ba-5b95-416a-9202-d882c4792aec" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.320500 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-serving-cert\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.320572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-config\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.320798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.320868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.321014 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4vb\" (UniqueName: \"kubernetes.io/projected/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-kube-api-access-ch4vb\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.321075 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqwxd\" (UniqueName: \"kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.321109 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-proxy-ca-bundles\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.321162 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-client-ca\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " 
pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.321187 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.322347 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.322874 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.335044 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.338426 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqwxd\" (UniqueName: 
\"kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd\") pod \"route-controller-manager-979b7f7c5-865j2\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.423722 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4vb\" (UniqueName: \"kubernetes.io/projected/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-kube-api-access-ch4vb\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.423850 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-proxy-ca-bundles\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.423919 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-client-ca\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.423987 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-serving-cert\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: 
I1124 21:19:38.424053 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-config\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.425210 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-client-ca\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.425873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-proxy-ca-bundles\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.426618 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-config\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.428989 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-serving-cert\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 
21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.445949 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4vb\" (UniqueName: \"kubernetes.io/projected/16e289fa-8cb2-4d7d-a69f-27e0840cc97d-kube-api-access-ch4vb\") pod \"controller-manager-6df556b585-xqslk\" (UID: \"16e289fa-8cb2-4d7d-a69f-27e0840cc97d\") " pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.486863 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.680860 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c79716-9856-4a80-bc3e-8d016e1bfc97" path="/var/lib/kubelet/pods/63c79716-9856-4a80-bc3e-8d016e1bfc97/volumes" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.682126 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c6c44d-dd82-4fb9-99e4-ab1f584909f4" path="/var/lib/kubelet/pods/a9c6c44d-dd82-4fb9-99e4-ab1f584909f4/volumes" Nov 24 21:19:38 crc kubenswrapper[4801]: I1124 21:19:38.983021 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df556b585-xqslk"] Nov 24 21:19:38 crc kubenswrapper[4801]: W1124 21:19:38.998688 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e289fa_8cb2_4d7d_a69f_27e0840cc97d.slice/crio-8a3f1819fead280270023271f7720fe848499eacb06a7eb2ceb6cebc1ae6bf0a WatchSource:0}: Error finding container 8a3f1819fead280270023271f7720fe848499eacb06a7eb2ceb6cebc1ae6bf0a: Status 404 returned error can't find the container with id 8a3f1819fead280270023271f7720fe848499eacb06a7eb2ceb6cebc1ae6bf0a Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.062793 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" event={"ID":"16e289fa-8cb2-4d7d-a69f-27e0840cc97d","Type":"ContainerStarted","Data":"8a3f1819fead280270023271f7720fe848499eacb06a7eb2ceb6cebc1ae6bf0a"} Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.063975 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.099127 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.242187 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert\") pod \"4de3e0ba-5b95-416a-9202-d882c4792aec\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.242277 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca\") pod \"4de3e0ba-5b95-416a-9202-d882c4792aec\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.242387 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config\") pod \"4de3e0ba-5b95-416a-9202-d882c4792aec\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.242416 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqwxd\" (UniqueName: \"kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd\") pod 
\"4de3e0ba-5b95-416a-9202-d882c4792aec\" (UID: \"4de3e0ba-5b95-416a-9202-d882c4792aec\") " Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.243136 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca" (OuterVolumeSpecName: "client-ca") pod "4de3e0ba-5b95-416a-9202-d882c4792aec" (UID: "4de3e0ba-5b95-416a-9202-d882c4792aec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.244315 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config" (OuterVolumeSpecName: "config") pod "4de3e0ba-5b95-416a-9202-d882c4792aec" (UID: "4de3e0ba-5b95-416a-9202-d882c4792aec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.248947 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd" (OuterVolumeSpecName: "kube-api-access-rqwxd") pod "4de3e0ba-5b95-416a-9202-d882c4792aec" (UID: "4de3e0ba-5b95-416a-9202-d882c4792aec"). InnerVolumeSpecName "kube-api-access-rqwxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.250054 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4de3e0ba-5b95-416a-9202-d882c4792aec" (UID: "4de3e0ba-5b95-416a-9202-d882c4792aec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.345074 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de3e0ba-5b95-416a-9202-d882c4792aec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.345144 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.345155 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3e0ba-5b95-416a-9202-d882c4792aec-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:39 crc kubenswrapper[4801]: I1124 21:19:39.345165 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqwxd\" (UniqueName: \"kubernetes.io/projected/4de3e0ba-5b95-416a-9202-d882c4792aec-kube-api-access-rqwxd\") on node \"crc\" DevicePath \"\"" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.072849 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.072895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" event={"ID":"16e289fa-8cb2-4d7d-a69f-27e0840cc97d","Type":"ContainerStarted","Data":"2cb0753ec34f4a73a003641c90e658ff0c77a434ce3b0da1666cae94af99dcea"} Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.074056 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.090245 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.113675 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df556b585-xqslk" podStartSLOduration=4.113644579 podStartE2EDuration="4.113644579s" podCreationTimestamp="2025-11-24 21:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:19:40.108577273 +0000 UTC m=+752.191163953" watchObservedRunningTime="2025-11-24 21:19:40.113644579 +0000 UTC m=+752.196231249" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.155530 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v"] Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.156869 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.160924 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.161248 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.161499 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.164305 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.164336 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.164487 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.173544 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2"] Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.189505 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-979b7f7c5-865j2"] Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.193465 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v"] Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.264466 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-config\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.264556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-client-ca\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.264674 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43883077-c1b5-4fae-a576-d76dd4c49dba-serving-cert\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.264727 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgz4h\" (UniqueName: \"kubernetes.io/projected/43883077-c1b5-4fae-a576-d76dd4c49dba-kube-api-access-cgz4h\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.365955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgz4h\" (UniqueName: \"kubernetes.io/projected/43883077-c1b5-4fae-a576-d76dd4c49dba-kube-api-access-cgz4h\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: 
\"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.366082 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-config\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.366120 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-client-ca\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.366157 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43883077-c1b5-4fae-a576-d76dd4c49dba-serving-cert\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.367921 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-config\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.368259 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/43883077-c1b5-4fae-a576-d76dd4c49dba-client-ca\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.372676 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43883077-c1b5-4fae-a576-d76dd4c49dba-serving-cert\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.384222 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgz4h\" (UniqueName: \"kubernetes.io/projected/43883077-c1b5-4fae-a576-d76dd4c49dba-kube-api-access-cgz4h\") pod \"route-controller-manager-847f9d6c96-dbx6v\" (UID: \"43883077-c1b5-4fae-a576-d76dd4c49dba\") " pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.474594 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.675789 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de3e0ba-5b95-416a-9202-d882c4792aec" path="/var/lib/kubelet/pods/4de3e0ba-5b95-416a-9202-d882c4792aec/volumes" Nov 24 21:19:40 crc kubenswrapper[4801]: I1124 21:19:40.751266 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v"] Nov 24 21:19:40 crc kubenswrapper[4801]: W1124 21:19:40.755994 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43883077_c1b5_4fae_a576_d76dd4c49dba.slice/crio-47c37af2f4e5f4369c01a9e6f3e2ed2d00c2bb7558dc2ab3d87ef14f94430dc9 WatchSource:0}: Error finding container 47c37af2f4e5f4369c01a9e6f3e2ed2d00c2bb7558dc2ab3d87ef14f94430dc9: Status 404 returned error can't find the container with id 47c37af2f4e5f4369c01a9e6f3e2ed2d00c2bb7558dc2ab3d87ef14f94430dc9 Nov 24 21:19:41 crc kubenswrapper[4801]: I1124 21:19:41.088808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" event={"ID":"43883077-c1b5-4fae-a576-d76dd4c49dba","Type":"ContainerStarted","Data":"e7f8a2118a835d8e0dd0406e7be49d61c23a6537b7a0f7f92b4d2e6b907faf14"} Nov 24 21:19:41 crc kubenswrapper[4801]: I1124 21:19:41.089271 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" event={"ID":"43883077-c1b5-4fae-a576-d76dd4c49dba","Type":"ContainerStarted","Data":"47c37af2f4e5f4369c01a9e6f3e2ed2d00c2bb7558dc2ab3d87ef14f94430dc9"} Nov 24 21:19:42 crc kubenswrapper[4801]: I1124 21:19:42.102778 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:42 crc kubenswrapper[4801]: I1124 21:19:42.108907 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" Nov 24 21:19:42 crc kubenswrapper[4801]: I1124 21:19:42.165394 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-847f9d6c96-dbx6v" podStartSLOduration=4.165331798 podStartE2EDuration="4.165331798s" podCreationTimestamp="2025-11-24 21:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:19:41.129921347 +0000 UTC m=+753.212508027" watchObservedRunningTime="2025-11-24 21:19:42.165331798 +0000 UTC m=+754.247918488" Nov 24 21:19:44 crc kubenswrapper[4801]: I1124 21:19:44.697892 4801 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 24 21:19:44 crc kubenswrapper[4801]: I1124 21:19:44.698547 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="36d32422-038a-496e-9b7f-8616b90efb2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:19:47 crc kubenswrapper[4801]: I1124 21:19:47.059603 4801 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.629675 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.632309 4801 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.654730 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.705444 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4nx\" (UniqueName: \"kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.705513 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.705579 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.807210 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4nx\" (UniqueName: \"kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.807633 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.808253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.808990 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.808694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.837740 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4nx\" (UniqueName: \"kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx\") pod \"redhat-marketplace-659kw\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:50 crc kubenswrapper[4801]: I1124 21:19:50.974433 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:19:51 crc kubenswrapper[4801]: I1124 21:19:51.426589 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:19:52 crc kubenswrapper[4801]: I1124 21:19:52.194075 4801 generic.go:334] "Generic (PLEG): container finished" podID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerID="bb7a4c8fc631950a96d186d177c1216adeb44709eb49564436065cdfcbbf855d" exitCode=0 Nov 24 21:19:52 crc kubenswrapper[4801]: I1124 21:19:52.194156 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerDied","Data":"bb7a4c8fc631950a96d186d177c1216adeb44709eb49564436065cdfcbbf855d"} Nov 24 21:19:52 crc kubenswrapper[4801]: I1124 21:19:52.194513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerStarted","Data":"e7ebc0a1a205979b241e1e2357606ec8da2b9b048f56c3434c9c0b4d24d43716"} Nov 24 21:19:53 crc kubenswrapper[4801]: I1124 21:19:53.205810 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerStarted","Data":"66d053cd7565b1c4974e2b35fa72a1282953eb9a9ff78a2231197f0262b80389"} Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.217459 4801 generic.go:334] "Generic (PLEG): container finished" podID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerID="66d053cd7565b1c4974e2b35fa72a1282953eb9a9ff78a2231197f0262b80389" exitCode=0 Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.217516 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" 
event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerDied","Data":"66d053cd7565b1c4974e2b35fa72a1282953eb9a9ff78a2231197f0262b80389"} Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.320712 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.321092 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.321476 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.322841 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.322983 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218" gracePeriod=600 Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.696075 4801 
patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 21:19:54 crc kubenswrapper[4801]: I1124 21:19:54.696580 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="36d32422-038a-496e-9b7f-8616b90efb2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.228968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerStarted","Data":"ff0b15a10f4e171dbdd3f84c9d261af43b7122abaee9ae26f3dee55a4afb3d26"} Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.232979 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218" exitCode=0 Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.233030 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218"} Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.233061 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c"} Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.233088 4801 scope.go:117] "RemoveContainer" 
containerID="4a2c1c8c89f37badccd577363e8b4343fdabe58545188aa55faababfb8eeb353" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.257584 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-659kw" podStartSLOduration=2.570521288 podStartE2EDuration="5.257560873s" podCreationTimestamp="2025-11-24 21:19:50 +0000 UTC" firstStartedPulling="2025-11-24 21:19:52.196667096 +0000 UTC m=+764.279253776" lastFinishedPulling="2025-11-24 21:19:54.883706661 +0000 UTC m=+766.966293361" observedRunningTime="2025-11-24 21:19:55.250784683 +0000 UTC m=+767.333371373" watchObservedRunningTime="2025-11-24 21:19:55.257560873 +0000 UTC m=+767.340147543" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.796595 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.799424 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.815301 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.899009 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.899227 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " 
pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:55 crc kubenswrapper[4801]: I1124 21:19:55.899283 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxz8\" (UniqueName: \"kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.001518 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.001607 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.001643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxz8\" (UniqueName: \"kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.005304 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " 
pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.005568 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.027420 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxz8\" (UniqueName: \"kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8\") pod \"redhat-operators-csp2w\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.116606 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:19:56 crc kubenswrapper[4801]: I1124 21:19:56.603617 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:19:57 crc kubenswrapper[4801]: I1124 21:19:57.276511 4801 generic.go:334] "Generic (PLEG): container finished" podID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerID="cda2b00e6d4b5186125564439912430c6a4c44a568f43bc1de9418e4e600fc35" exitCode=0 Nov 24 21:19:57 crc kubenswrapper[4801]: I1124 21:19:57.276636 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerDied","Data":"cda2b00e6d4b5186125564439912430c6a4c44a568f43bc1de9418e4e600fc35"} Nov 24 21:19:57 crc kubenswrapper[4801]: I1124 21:19:57.277307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" 
event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerStarted","Data":"9d2729fc9dc42734e4a497940aed92563afd013798ad65dde183dc249896fe6a"} Nov 24 21:19:58 crc kubenswrapper[4801]: I1124 21:19:58.305089 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerStarted","Data":"146f51ee9e327563a7255071d088db9cfa87c477c63871ff389f6cffd8f54a48"} Nov 24 21:19:59 crc kubenswrapper[4801]: I1124 21:19:59.316428 4801 generic.go:334] "Generic (PLEG): container finished" podID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerID="146f51ee9e327563a7255071d088db9cfa87c477c63871ff389f6cffd8f54a48" exitCode=0 Nov 24 21:19:59 crc kubenswrapper[4801]: I1124 21:19:59.316498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerDied","Data":"146f51ee9e327563a7255071d088db9cfa87c477c63871ff389f6cffd8f54a48"} Nov 24 21:20:00 crc kubenswrapper[4801]: I1124 21:20:00.328087 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerStarted","Data":"133defd93d13371d3885077d675e31a89fa07e2d36f551de1372f7f2c59fc7d3"} Nov 24 21:20:00 crc kubenswrapper[4801]: I1124 21:20:00.348959 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-csp2w" podStartSLOduration=2.8798151 podStartE2EDuration="5.348933432s" podCreationTimestamp="2025-11-24 21:19:55 +0000 UTC" firstStartedPulling="2025-11-24 21:19:57.27917269 +0000 UTC m=+769.361759360" lastFinishedPulling="2025-11-24 21:19:59.748291022 +0000 UTC m=+771.830877692" observedRunningTime="2025-11-24 21:20:00.344612899 +0000 UTC m=+772.427199579" watchObservedRunningTime="2025-11-24 21:20:00.348933432 +0000 UTC m=+772.431520132" Nov 24 
21:20:00 crc kubenswrapper[4801]: I1124 21:20:00.975237 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:00 crc kubenswrapper[4801]: I1124 21:20:00.975313 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:01 crc kubenswrapper[4801]: I1124 21:20:01.021225 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:01 crc kubenswrapper[4801]: I1124 21:20:01.373084 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:03 crc kubenswrapper[4801]: I1124 21:20:03.180516 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:20:03 crc kubenswrapper[4801]: I1124 21:20:03.355515 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-659kw" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="registry-server" containerID="cri-o://ff0b15a10f4e171dbdd3f84c9d261af43b7122abaee9ae26f3dee55a4afb3d26" gracePeriod=2 Nov 24 21:20:04 crc kubenswrapper[4801]: I1124 21:20:04.694956 4801 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 21:20:04 crc kubenswrapper[4801]: I1124 21:20:04.695425 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="36d32422-038a-496e-9b7f-8616b90efb2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:20:05 crc kubenswrapper[4801]: I1124 21:20:05.373355 
4801 generic.go:334] "Generic (PLEG): container finished" podID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerID="ff0b15a10f4e171dbdd3f84c9d261af43b7122abaee9ae26f3dee55a4afb3d26" exitCode=0 Nov 24 21:20:05 crc kubenswrapper[4801]: I1124 21:20:05.373404 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerDied","Data":"ff0b15a10f4e171dbdd3f84c9d261af43b7122abaee9ae26f3dee55a4afb3d26"} Nov 24 21:20:05 crc kubenswrapper[4801]: I1124 21:20:05.967237 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.115238 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content\") pod \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.115449 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities\") pod \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.115526 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4nx\" (UniqueName: \"kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx\") pod \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\" (UID: \"9d4b840f-3776-4550-b1e0-ec9ab4cbacda\") " Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.116979 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities" 
(OuterVolumeSpecName: "utilities") pod "9d4b840f-3776-4550-b1e0-ec9ab4cbacda" (UID: "9d4b840f-3776-4550-b1e0-ec9ab4cbacda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.117220 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.117277 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.124582 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx" (OuterVolumeSpecName: "kube-api-access-xw4nx") pod "9d4b840f-3776-4550-b1e0-ec9ab4cbacda" (UID: "9d4b840f-3776-4550-b1e0-ec9ab4cbacda"). InnerVolumeSpecName "kube-api-access-xw4nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.136663 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4b840f-3776-4550-b1e0-ec9ab4cbacda" (UID: "9d4b840f-3776-4550-b1e0-ec9ab4cbacda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.173335 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.218242 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.218289 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4nx\" (UniqueName: \"kubernetes.io/projected/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-kube-api-access-xw4nx\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.218306 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4b840f-3776-4550-b1e0-ec9ab4cbacda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.382587 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-659kw" event={"ID":"9d4b840f-3776-4550-b1e0-ec9ab4cbacda","Type":"ContainerDied","Data":"e7ebc0a1a205979b241e1e2357606ec8da2b9b048f56c3434c9c0b4d24d43716"} Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.382660 4801 scope.go:117] "RemoveContainer" containerID="ff0b15a10f4e171dbdd3f84c9d261af43b7122abaee9ae26f3dee55a4afb3d26" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.382615 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-659kw" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.407435 4801 scope.go:117] "RemoveContainer" containerID="66d053cd7565b1c4974e2b35fa72a1282953eb9a9ff78a2231197f0262b80389" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.436192 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.444801 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-659kw"] Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.460743 4801 scope.go:117] "RemoveContainer" containerID="bb7a4c8fc631950a96d186d177c1216adeb44709eb49564436065cdfcbbf855d" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.461142 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:06 crc kubenswrapper[4801]: I1124 21:20:06.672337 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" path="/var/lib/kubelet/pods/9d4b840f-3776-4550-b1e0-ec9ab4cbacda/volumes" Nov 24 21:20:08 crc kubenswrapper[4801]: I1124 21:20:08.607121 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:20:08 crc kubenswrapper[4801]: I1124 21:20:08.610899 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-csp2w" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="registry-server" containerID="cri-o://133defd93d13371d3885077d675e31a89fa07e2d36f551de1372f7f2c59fc7d3" gracePeriod=2 Nov 24 21:20:09 crc kubenswrapper[4801]: I1124 21:20:09.433215 4801 generic.go:334] "Generic (PLEG): container finished" podID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" 
containerID="133defd93d13371d3885077d675e31a89fa07e2d36f551de1372f7f2c59fc7d3" exitCode=0 Nov 24 21:20:09 crc kubenswrapper[4801]: I1124 21:20:09.433301 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerDied","Data":"133defd93d13371d3885077d675e31a89fa07e2d36f551de1372f7f2c59fc7d3"} Nov 24 21:20:09 crc kubenswrapper[4801]: I1124 21:20:09.827432 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.004558 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities\") pod \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.004912 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxz8\" (UniqueName: \"kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8\") pod \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.005140 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content\") pod \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\" (UID: \"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12\") " Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.005861 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities" (OuterVolumeSpecName: "utilities") pod "0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" (UID: 
"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.023942 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8" (OuterVolumeSpecName: "kube-api-access-7dxz8") pod "0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" (UID: "0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12"). InnerVolumeSpecName "kube-api-access-7dxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.107996 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxz8\" (UniqueName: \"kubernetes.io/projected/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-kube-api-access-7dxz8\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.108038 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.110672 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" (UID: "0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.210676 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.445950 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-csp2w" event={"ID":"0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12","Type":"ContainerDied","Data":"9d2729fc9dc42734e4a497940aed92563afd013798ad65dde183dc249896fe6a"} Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.446010 4801 scope.go:117] "RemoveContainer" containerID="133defd93d13371d3885077d675e31a89fa07e2d36f551de1372f7f2c59fc7d3" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.446097 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-csp2w" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.493477 4801 scope.go:117] "RemoveContainer" containerID="146f51ee9e327563a7255071d088db9cfa87c477c63871ff389f6cffd8f54a48" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.507796 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.512224 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-csp2w"] Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.523659 4801 scope.go:117] "RemoveContainer" containerID="cda2b00e6d4b5186125564439912430c6a4c44a568f43bc1de9418e4e600fc35" Nov 24 21:20:10 crc kubenswrapper[4801]: I1124 21:20:10.681163 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" path="/var/lib/kubelet/pods/0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12/volumes" Nov 24 21:20:13 crc 
kubenswrapper[4801]: I1124 21:20:13.915128 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.917313 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="extract-utilities" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.917431 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="extract-utilities" Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.917512 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="extract-utilities" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.917574 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="extract-utilities" Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.917657 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.917713 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.917769 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="extract-content" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.917822 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="extract-content" Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.917879 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 
21:20:13.917933 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: E1124 21:20:13.918004 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="extract-content" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.918064 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="extract-content" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.918239 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4b840f-3776-4550-b1e0-ec9ab4cbacda" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.918304 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce2d6c9-fbc7-4560-94a7-c1c3d8d78c12" containerName="registry-server" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.919598 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:13 crc kubenswrapper[4801]: I1124 21:20:13.924814 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.079925 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.080860 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtnm6\" (UniqueName: \"kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.081116 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.182434 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.182875 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xtnm6\" (UniqueName: \"kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.182952 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.183264 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.183568 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.220165 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtnm6\" (UniqueName: \"kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6\") pod \"certified-operators-tzmc9\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.242107 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.692548 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:20:14 crc kubenswrapper[4801]: I1124 21:20:14.724173 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:14 crc kubenswrapper[4801]: W1124 21:20:14.736115 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adfe124_a7e2_4193_a9a5_46cf4ba782ce.slice/crio-5575da7e99a22ac78cf3cfe971c1954f4cb70809b67c2068e8f80b59ecac5dcf WatchSource:0}: Error finding container 5575da7e99a22ac78cf3cfe971c1954f4cb70809b67c2068e8f80b59ecac5dcf: Status 404 returned error can't find the container with id 5575da7e99a22ac78cf3cfe971c1954f4cb70809b67c2068e8f80b59ecac5dcf Nov 24 21:20:15 crc kubenswrapper[4801]: I1124 21:20:15.487871 4801 generic.go:334] "Generic (PLEG): container finished" podID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerID="dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2" exitCode=0 Nov 24 21:20:15 crc kubenswrapper[4801]: I1124 21:20:15.487971 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerDied","Data":"dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2"} Nov 24 21:20:15 crc kubenswrapper[4801]: I1124 21:20:15.488293 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerStarted","Data":"5575da7e99a22ac78cf3cfe971c1954f4cb70809b67c2068e8f80b59ecac5dcf"} Nov 24 21:20:16 crc kubenswrapper[4801]: I1124 21:20:16.498423 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerStarted","Data":"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e"} Nov 24 21:20:17 crc kubenswrapper[4801]: I1124 21:20:17.510193 4801 generic.go:334] "Generic (PLEG): container finished" podID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerID="03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e" exitCode=0 Nov 24 21:20:17 crc kubenswrapper[4801]: I1124 21:20:17.510240 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerDied","Data":"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e"} Nov 24 21:20:18 crc kubenswrapper[4801]: I1124 21:20:18.521906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerStarted","Data":"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd"} Nov 24 21:20:18 crc kubenswrapper[4801]: I1124 21:20:18.543560 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzmc9" podStartSLOduration=2.8991566239999997 podStartE2EDuration="5.543520126s" podCreationTimestamp="2025-11-24 21:20:13 +0000 UTC" firstStartedPulling="2025-11-24 21:20:15.490787232 +0000 UTC m=+787.573373912" lastFinishedPulling="2025-11-24 21:20:18.135150744 +0000 UTC m=+790.217737414" observedRunningTime="2025-11-24 21:20:18.539833393 +0000 UTC m=+790.622420073" watchObservedRunningTime="2025-11-24 21:20:18.543520126 +0000 UTC m=+790.626106806" Nov 24 21:20:24 crc kubenswrapper[4801]: I1124 21:20:24.242766 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:24 crc kubenswrapper[4801]: I1124 
21:20:24.243542 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:24 crc kubenswrapper[4801]: I1124 21:20:24.313886 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:24 crc kubenswrapper[4801]: I1124 21:20:24.642395 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:24 crc kubenswrapper[4801]: I1124 21:20:24.712524 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:26 crc kubenswrapper[4801]: I1124 21:20:26.586791 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzmc9" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="registry-server" containerID="cri-o://4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd" gracePeriod=2 Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.028156 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.143152 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities\") pod \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.143549 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content\") pod \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.143607 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtnm6\" (UniqueName: \"kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6\") pod \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\" (UID: \"6adfe124-a7e2-4193-a9a5-46cf4ba782ce\") " Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.145084 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities" (OuterVolumeSpecName: "utilities") pod "6adfe124-a7e2-4193-a9a5-46cf4ba782ce" (UID: "6adfe124-a7e2-4193-a9a5-46cf4ba782ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.152230 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6" (OuterVolumeSpecName: "kube-api-access-xtnm6") pod "6adfe124-a7e2-4193-a9a5-46cf4ba782ce" (UID: "6adfe124-a7e2-4193-a9a5-46cf4ba782ce"). InnerVolumeSpecName "kube-api-access-xtnm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.255779 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtnm6\" (UniqueName: \"kubernetes.io/projected/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-kube-api-access-xtnm6\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.255830 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.605066 4801 generic.go:334] "Generic (PLEG): container finished" podID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerID="4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd" exitCode=0 Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.605121 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerDied","Data":"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd"} Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.605149 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmc9" event={"ID":"6adfe124-a7e2-4193-a9a5-46cf4ba782ce","Type":"ContainerDied","Data":"5575da7e99a22ac78cf3cfe971c1954f4cb70809b67c2068e8f80b59ecac5dcf"} Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.605155 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzmc9" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.605167 4801 scope.go:117] "RemoveContainer" containerID="4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.626668 4801 scope.go:117] "RemoveContainer" containerID="03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.651434 4801 scope.go:117] "RemoveContainer" containerID="dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.678323 4801 scope.go:117] "RemoveContainer" containerID="4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd" Nov 24 21:20:27 crc kubenswrapper[4801]: E1124 21:20:27.678867 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd\": container with ID starting with 4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd not found: ID does not exist" containerID="4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.678900 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd"} err="failed to get container status \"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd\": rpc error: code = NotFound desc = could not find container \"4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd\": container with ID starting with 4a11e50f6c0bd3899e6c6ac22e0587d5b299ac9fce90c98e1902ea25f78ce7cd not found: ID does not exist" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.678928 4801 scope.go:117] "RemoveContainer" 
containerID="03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e" Nov 24 21:20:27 crc kubenswrapper[4801]: E1124 21:20:27.679599 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e\": container with ID starting with 03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e not found: ID does not exist" containerID="03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.679622 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e"} err="failed to get container status \"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e\": rpc error: code = NotFound desc = could not find container \"03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e\": container with ID starting with 03343c32f1e741e51274f636f2f0d4630be5832bbd9e559879295620c2f7ed4e not found: ID does not exist" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.679640 4801 scope.go:117] "RemoveContainer" containerID="dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2" Nov 24 21:20:27 crc kubenswrapper[4801]: E1124 21:20:27.681411 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2\": container with ID starting with dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2 not found: ID does not exist" containerID="dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.681476 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2"} err="failed to get container status \"dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2\": rpc error: code = NotFound desc = could not find container \"dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2\": container with ID starting with dbb6a115d236ecbbc3be65fb71ba3fbae2a488c9485b03f6aba71710772c29d2 not found: ID does not exist" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.683803 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6adfe124-a7e2-4193-a9a5-46cf4ba782ce" (UID: "6adfe124-a7e2-4193-a9a5-46cf4ba782ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.764402 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfe124-a7e2-4193-a9a5-46cf4ba782ce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.940887 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:27 crc kubenswrapper[4801]: I1124 21:20:27.948062 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzmc9"] Nov 24 21:20:28 crc kubenswrapper[4801]: I1124 21:20:28.678604 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" path="/var/lib/kubelet/pods/6adfe124-a7e2-4193-a9a5-46cf4ba782ce/volumes" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.437703 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rjtr2"] Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.438665 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="registry-server" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.438685 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="registry-server" Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.438712 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="extract-content" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.438722 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="extract-content" Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.438750 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="extract-utilities" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.438760 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="extract-utilities" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.438940 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adfe124-a7e2-4193-a9a5-46cf4ba782ce" containerName="registry-server" Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.439770 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.445260 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-w6bnl"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.446524 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.446624 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.446841 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.446880 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.466397 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rjtr2"]
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.470674 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494472 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494546 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494622 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494642 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494723 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494757 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494810 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5pz\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494889 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494940 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.494959 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.546838 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rjtr2"]
Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.549025 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-st5pz metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-rjtr2" podUID="bbd80700-ca76-483e-b4aa-78e734531916"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.596833 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.596910 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.596933 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.596974 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597000 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597033 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5pz\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597126 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.597155 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.598029 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.598926 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.599130 4801 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Nov 24 21:20:33 crc kubenswrapper[4801]: E1124 21:20:33.599217 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver podName:bbd80700-ca76-483e-b4aa-78e734531916 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.099188965 +0000 UTC m=+806.181775655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver") pod "collector-rjtr2" (UID: "bbd80700-ca76-483e-b4aa-78e734531916") : secret "collector-syslog-receiver" not found
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.599768 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.600792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.600812 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.605746 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.606385 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.606519 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.615861 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.631178 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5pz\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.656908 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.673498 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.698294 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5pz\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.698783 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.698959 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699085 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699259 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699388 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699508 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699622 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699792 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699924 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config" (OuterVolumeSpecName: "config") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699658 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699689 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir" (OuterVolumeSpecName: "datadir") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.699835 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.700217 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.701001 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.701142 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.701236 4801 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.701318 4801 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbd80700-ca76-483e-b4aa-78e734531916-datadir\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.701437 4801 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbd80700-ca76-483e-b4aa-78e734531916-entrypoint\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.702569 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics" (OuterVolumeSpecName: "metrics") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.703096 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz" (OuterVolumeSpecName: "kube-api-access-st5pz") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "kube-api-access-st5pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.703276 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token" (OuterVolumeSpecName: "sa-token") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.704831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token" (OuterVolumeSpecName: "collector-token") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.709281 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp" (OuterVolumeSpecName: "tmp") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.803227 4801 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-token\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.803270 4801 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd80700-ca76-483e-b4aa-78e734531916-tmp\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.803281 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5pz\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-kube-api-access-st5pz\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.803293 4801 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-metrics\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:33 crc kubenswrapper[4801]: I1124 21:20:33.803305 4801 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbd80700-ca76-483e-b4aa-78e734531916-sa-token\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.110144 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.114249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") pod \"collector-rjtr2\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") " pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.212165 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") pod \"bbd80700-ca76-483e-b4aa-78e734531916\" (UID: \"bbd80700-ca76-483e-b4aa-78e734531916\") "
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.216159 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "bbd80700-ca76-483e-b4aa-78e734531916" (UID: "bbd80700-ca76-483e-b4aa-78e734531916"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.315917 4801 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbd80700-ca76-483e-b4aa-78e734531916-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.668829 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rjtr2"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.746244 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rjtr2"]
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.783592 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rjtr2"]
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.809519 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-hd6fb"]
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.812141 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.816609 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-w6bnl"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.817175 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.817326 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.817509 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.817714 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-hd6fb"]
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.818014 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.849865 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-entrypoint\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.850937 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-syslog-receiver\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.850976 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcq5\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-kube-api-access-6jcq5\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851042 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-sa-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851086 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-trusted-ca\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851254 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-metrics\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851282 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851315 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1128bf46-e782-4916-8554-b9e6e6ed28f5-tmp\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851355 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/1128bf46-e782-4916-8554-b9e6e6ed28f5-datadir\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851439 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.851491 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config-openshift-service-cacrt\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.856465 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.954653 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-entrypoint\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.954920 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-syslog-receiver\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.954987 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcq5\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-kube-api-access-6jcq5\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955041 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-sa-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955069 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-trusted-ca\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-metrics\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955217 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955241 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1128bf46-e782-4916-8554-b9e6e6ed28f5-tmp\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955295 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/1128bf46-e782-4916-8554-b9e6e6ed28f5-datadir\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955330 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config-openshift-service-cacrt\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955929 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/1128bf46-e782-4916-8554-b9e6e6ed28f5-datadir\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.955942 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-entrypoint\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.956650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.957663 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-config-openshift-service-cacrt\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.957814 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1128bf46-e782-4916-8554-b9e6e6ed28f5-trusted-ca\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.961681 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-metrics\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.968502 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.970894 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/1128bf46-e782-4916-8554-b9e6e6ed28f5-collector-syslog-receiver\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.971071 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1128bf46-e782-4916-8554-b9e6e6ed28f5-tmp\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc kubenswrapper[4801]: I1124 21:20:34.976255 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-sa-token\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb"
Nov 24 21:20:34 crc
kubenswrapper[4801]: I1124 21:20:34.979299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcq5\" (UniqueName: \"kubernetes.io/projected/1128bf46-e782-4916-8554-b9e6e6ed28f5-kube-api-access-6jcq5\") pod \"collector-hd6fb\" (UID: \"1128bf46-e782-4916-8554-b9e6e6ed28f5\") " pod="openshift-logging/collector-hd6fb" Nov 24 21:20:35 crc kubenswrapper[4801]: I1124 21:20:35.155476 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-hd6fb" Nov 24 21:20:35 crc kubenswrapper[4801]: I1124 21:20:35.711347 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-hd6fb"] Nov 24 21:20:35 crc kubenswrapper[4801]: W1124 21:20:35.717811 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1128bf46_e782_4916_8554_b9e6e6ed28f5.slice/crio-927c181cb37278333ed9a8180ba67755bfab2c5cb8415f7e732df79b77445471 WatchSource:0}: Error finding container 927c181cb37278333ed9a8180ba67755bfab2c5cb8415f7e732df79b77445471: Status 404 returned error can't find the container with id 927c181cb37278333ed9a8180ba67755bfab2c5cb8415f7e732df79b77445471 Nov 24 21:20:36 crc kubenswrapper[4801]: I1124 21:20:36.681679 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd80700-ca76-483e-b4aa-78e734531916" path="/var/lib/kubelet/pods/bbd80700-ca76-483e-b4aa-78e734531916/volumes" Nov 24 21:20:36 crc kubenswrapper[4801]: I1124 21:20:36.689150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-hd6fb" event={"ID":"1128bf46-e782-4916-8554-b9e6e6ed28f5","Type":"ContainerStarted","Data":"927c181cb37278333ed9a8180ba67755bfab2c5cb8415f7e732df79b77445471"} Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.403816 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:20:39 crc 
kubenswrapper[4801]: I1124 21:20:39.406058 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.424771 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.450676 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.450752 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz974\" (UniqueName: \"kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.450840 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.552226 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz974\" (UniqueName: \"kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " 
pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.552325 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.552414 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.553080 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.553128 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.596265 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz974\" (UniqueName: \"kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974\") pod \"community-operators-p66qs\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " 
pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:39 crc kubenswrapper[4801]: I1124 21:20:39.740062 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:43 crc kubenswrapper[4801]: I1124 21:20:43.626815 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:20:43 crc kubenswrapper[4801]: W1124 21:20:43.636171 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d13ddc_78f4_472c_b6e6_533fd57e3021.slice/crio-bfdc36464cea4f3c2cbfd449317d8fd6daddfa163acf01f77697fd962cb7933a WatchSource:0}: Error finding container bfdc36464cea4f3c2cbfd449317d8fd6daddfa163acf01f77697fd962cb7933a: Status 404 returned error can't find the container with id bfdc36464cea4f3c2cbfd449317d8fd6daddfa163acf01f77697fd962cb7933a Nov 24 21:20:43 crc kubenswrapper[4801]: I1124 21:20:43.788421 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-hd6fb" event={"ID":"1128bf46-e782-4916-8554-b9e6e6ed28f5","Type":"ContainerStarted","Data":"eefaa9d76f59fd44347ae67fd07bc52731b8dfffbbe5cdadb4844e15cfd1daec"} Nov 24 21:20:43 crc kubenswrapper[4801]: I1124 21:20:43.790246 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerStarted","Data":"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf"} Nov 24 21:20:43 crc kubenswrapper[4801]: I1124 21:20:43.790318 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerStarted","Data":"bfdc36464cea4f3c2cbfd449317d8fd6daddfa163acf01f77697fd962cb7933a"} Nov 24 21:20:43 crc kubenswrapper[4801]: I1124 21:20:43.818462 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-hd6fb" podStartSLOduration=2.148241833 podStartE2EDuration="9.818432444s" podCreationTimestamp="2025-11-24 21:20:34 +0000 UTC" firstStartedPulling="2025-11-24 21:20:35.721482351 +0000 UTC m=+807.804069021" lastFinishedPulling="2025-11-24 21:20:43.391672962 +0000 UTC m=+815.474259632" observedRunningTime="2025-11-24 21:20:43.813936095 +0000 UTC m=+815.896522765" watchObservedRunningTime="2025-11-24 21:20:43.818432444 +0000 UTC m=+815.901019124" Nov 24 21:20:44 crc kubenswrapper[4801]: I1124 21:20:44.799959 4801 generic.go:334] "Generic (PLEG): container finished" podID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerID="382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf" exitCode=0 Nov 24 21:20:44 crc kubenswrapper[4801]: I1124 21:20:44.800012 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerDied","Data":"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf"} Nov 24 21:20:45 crc kubenswrapper[4801]: I1124 21:20:45.813520 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerStarted","Data":"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8"} Nov 24 21:20:46 crc kubenswrapper[4801]: I1124 21:20:46.824985 4801 generic.go:334] "Generic (PLEG): container finished" podID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerID="787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8" exitCode=0 Nov 24 21:20:46 crc kubenswrapper[4801]: I1124 21:20:46.825043 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" 
event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerDied","Data":"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8"} Nov 24 21:20:47 crc kubenswrapper[4801]: I1124 21:20:47.843187 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerStarted","Data":"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f"} Nov 24 21:20:47 crc kubenswrapper[4801]: I1124 21:20:47.867838 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p66qs" podStartSLOduration=6.44367151 podStartE2EDuration="8.867817419s" podCreationTimestamp="2025-11-24 21:20:39 +0000 UTC" firstStartedPulling="2025-11-24 21:20:44.802391591 +0000 UTC m=+816.884978261" lastFinishedPulling="2025-11-24 21:20:47.22653746 +0000 UTC m=+819.309124170" observedRunningTime="2025-11-24 21:20:47.865445926 +0000 UTC m=+819.948032586" watchObservedRunningTime="2025-11-24 21:20:47.867817419 +0000 UTC m=+819.950404089" Nov 24 21:20:49 crc kubenswrapper[4801]: I1124 21:20:49.741089 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:49 crc kubenswrapper[4801]: I1124 21:20:49.742653 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:49 crc kubenswrapper[4801]: I1124 21:20:49.797743 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:59 crc kubenswrapper[4801]: I1124 21:20:59.821424 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:20:59 crc kubenswrapper[4801]: I1124 21:20:59.902877 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:20:59 crc kubenswrapper[4801]: I1124 21:20:59.943008 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p66qs" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="registry-server" containerID="cri-o://0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f" gracePeriod=2 Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.438356 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.477524 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz974\" (UniqueName: \"kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974\") pod \"35d13ddc-78f4-472c-b6e6-533fd57e3021\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.477705 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content\") pod \"35d13ddc-78f4-472c-b6e6-533fd57e3021\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.484496 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities\") pod \"35d13ddc-78f4-472c-b6e6-533fd57e3021\" (UID: \"35d13ddc-78f4-472c-b6e6-533fd57e3021\") " Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.486512 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities" (OuterVolumeSpecName: "utilities") pod "35d13ddc-78f4-472c-b6e6-533fd57e3021" (UID: 
"35d13ddc-78f4-472c-b6e6-533fd57e3021"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.498502 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974" (OuterVolumeSpecName: "kube-api-access-vz974") pod "35d13ddc-78f4-472c-b6e6-533fd57e3021" (UID: "35d13ddc-78f4-472c-b6e6-533fd57e3021"). InnerVolumeSpecName "kube-api-access-vz974". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.539247 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35d13ddc-78f4-472c-b6e6-533fd57e3021" (UID: "35d13ddc-78f4-472c-b6e6-533fd57e3021"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.588830 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz974\" (UniqueName: \"kubernetes.io/projected/35d13ddc-78f4-472c-b6e6-533fd57e3021-kube-api-access-vz974\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.588882 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.588896 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d13ddc-78f4-472c-b6e6-533fd57e3021-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.958426 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerID="0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f" exitCode=0 Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.958482 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerDied","Data":"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f"} Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.958524 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p66qs" event={"ID":"35d13ddc-78f4-472c-b6e6-533fd57e3021","Type":"ContainerDied","Data":"bfdc36464cea4f3c2cbfd449317d8fd6daddfa163acf01f77697fd962cb7933a"} Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.958545 4801 scope.go:117] "RemoveContainer" containerID="0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.958589 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p66qs" Nov 24 21:21:00 crc kubenswrapper[4801]: I1124 21:21:00.997231 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.005922 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p66qs"] Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.008880 4801 scope.go:117] "RemoveContainer" containerID="787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.049688 4801 scope.go:117] "RemoveContainer" containerID="382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.083319 4801 scope.go:117] "RemoveContainer" containerID="0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f" Nov 24 21:21:01 crc kubenswrapper[4801]: E1124 21:21:01.083985 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f\": container with ID starting with 0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f not found: ID does not exist" containerID="0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.084040 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f"} err="failed to get container status \"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f\": rpc error: code = NotFound desc = could not find container \"0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f\": container with ID starting with 0215b6e1a0b372963cac692f72ee36719364cd35c9fad038ed456928e7ea735f not 
found: ID does not exist" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.084080 4801 scope.go:117] "RemoveContainer" containerID="787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8" Nov 24 21:21:01 crc kubenswrapper[4801]: E1124 21:21:01.084532 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8\": container with ID starting with 787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8 not found: ID does not exist" containerID="787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.084585 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8"} err="failed to get container status \"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8\": rpc error: code = NotFound desc = could not find container \"787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8\": container with ID starting with 787d26ae70afca9cf7a4411a6009009152c929a816a9eb676a5e7b91425be9f8 not found: ID does not exist" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.084623 4801 scope.go:117] "RemoveContainer" containerID="382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf" Nov 24 21:21:01 crc kubenswrapper[4801]: E1124 21:21:01.084941 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf\": container with ID starting with 382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf not found: ID does not exist" containerID="382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf" Nov 24 21:21:01 crc kubenswrapper[4801]: I1124 21:21:01.084982 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf"} err="failed to get container status \"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf\": rpc error: code = NotFound desc = could not find container \"382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf\": container with ID starting with 382f8d8b6dbfcd331dbebb64cb784083220431df3e0fea02f9e527ba8049b7cf not found: ID does not exist" Nov 24 21:21:02 crc kubenswrapper[4801]: I1124 21:21:02.692244 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" path="/var/lib/kubelet/pods/35d13ddc-78f4-472c-b6e6-533fd57e3021/volumes" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.524072 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l"] Nov 24 21:21:15 crc kubenswrapper[4801]: E1124 21:21:15.525234 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="extract-content" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.525257 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="extract-content" Nov 24 21:21:15 crc kubenswrapper[4801]: E1124 21:21:15.525279 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="extract-utilities" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.525293 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="extract-utilities" Nov 24 21:21:15 crc kubenswrapper[4801]: E1124 21:21:15.525318 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="registry-server" Nov 24 
21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.525331 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="registry-server" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.525636 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d13ddc-78f4-472c-b6e6-533fd57e3021" containerName="registry-server" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.527636 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.529591 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.534534 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l"] Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.593997 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.594060 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: 
I1124 21:21:15.594159 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trl4s\" (UniqueName: \"kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.695347 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trl4s\" (UniqueName: \"kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.695449 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.695477 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.695947 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.696211 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.721939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trl4s\" (UniqueName: \"kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:15 crc kubenswrapper[4801]: I1124 21:21:15.852065 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:16 crc kubenswrapper[4801]: I1124 21:21:16.461000 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l"] Nov 24 21:21:17 crc kubenswrapper[4801]: I1124 21:21:17.140202 4801 generic.go:334] "Generic (PLEG): container finished" podID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerID="568c8c14c1ed8258a58d66285d8ffc181082c8a1fe9120987202ebabb315c0ef" exitCode=0 Nov 24 21:21:17 crc kubenswrapper[4801]: I1124 21:21:17.140278 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" event={"ID":"356996ee-8ace-43af-8ef1-4ca116d86ffe","Type":"ContainerDied","Data":"568c8c14c1ed8258a58d66285d8ffc181082c8a1fe9120987202ebabb315c0ef"} Nov 24 21:21:17 crc kubenswrapper[4801]: I1124 21:21:17.140647 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" event={"ID":"356996ee-8ace-43af-8ef1-4ca116d86ffe","Type":"ContainerStarted","Data":"4146a0618ff2c27dd6ea6b2af4d5e784b9072047a9207a19ad186d039861387e"} Nov 24 21:21:19 crc kubenswrapper[4801]: I1124 21:21:19.161962 4801 generic.go:334] "Generic (PLEG): container finished" podID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerID="10049cd650d62ad6c87fda37490830705cce6e391eb4297d272aada13be627b8" exitCode=0 Nov 24 21:21:19 crc kubenswrapper[4801]: I1124 21:21:19.162071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" event={"ID":"356996ee-8ace-43af-8ef1-4ca116d86ffe","Type":"ContainerDied","Data":"10049cd650d62ad6c87fda37490830705cce6e391eb4297d272aada13be627b8"} Nov 24 21:21:20 crc kubenswrapper[4801]: I1124 21:21:20.171389 4801 
generic.go:334] "Generic (PLEG): container finished" podID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerID="976563d8fa088ce5012625e33edc5faca31db113a0f7d1a989dc2ab1b1285895" exitCode=0 Nov 24 21:21:20 crc kubenswrapper[4801]: I1124 21:21:20.171482 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" event={"ID":"356996ee-8ace-43af-8ef1-4ca116d86ffe","Type":"ContainerDied","Data":"976563d8fa088ce5012625e33edc5faca31db113a0f7d1a989dc2ab1b1285895"} Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.561092 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.718725 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle\") pod \"356996ee-8ace-43af-8ef1-4ca116d86ffe\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.719300 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util\") pod \"356996ee-8ace-43af-8ef1-4ca116d86ffe\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.719333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trl4s\" (UniqueName: \"kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s\") pod \"356996ee-8ace-43af-8ef1-4ca116d86ffe\" (UID: \"356996ee-8ace-43af-8ef1-4ca116d86ffe\") " Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.719794 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle" (OuterVolumeSpecName: "bundle") pod "356996ee-8ace-43af-8ef1-4ca116d86ffe" (UID: "356996ee-8ace-43af-8ef1-4ca116d86ffe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.728569 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s" (OuterVolumeSpecName: "kube-api-access-trl4s") pod "356996ee-8ace-43af-8ef1-4ca116d86ffe" (UID: "356996ee-8ace-43af-8ef1-4ca116d86ffe"). InnerVolumeSpecName "kube-api-access-trl4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.734918 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util" (OuterVolumeSpecName: "util") pod "356996ee-8ace-43af-8ef1-4ca116d86ffe" (UID: "356996ee-8ace-43af-8ef1-4ca116d86ffe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.820941 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.820998 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trl4s\" (UniqueName: \"kubernetes.io/projected/356996ee-8ace-43af-8ef1-4ca116d86ffe-kube-api-access-trl4s\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:21 crc kubenswrapper[4801]: I1124 21:21:21.821012 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/356996ee-8ace-43af-8ef1-4ca116d86ffe-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:21:22 crc kubenswrapper[4801]: I1124 21:21:22.193590 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" event={"ID":"356996ee-8ace-43af-8ef1-4ca116d86ffe","Type":"ContainerDied","Data":"4146a0618ff2c27dd6ea6b2af4d5e784b9072047a9207a19ad186d039861387e"} Nov 24 21:21:22 crc kubenswrapper[4801]: I1124 21:21:22.194078 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4146a0618ff2c27dd6ea6b2af4d5e784b9072047a9207a19ad186d039861387e" Nov 24 21:21:22 crc kubenswrapper[4801]: I1124 21:21:22.193718 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.660106 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nqqlk"] Nov 24 21:21:24 crc kubenswrapper[4801]: E1124 21:21:24.660480 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="util" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.660497 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="util" Nov 24 21:21:24 crc kubenswrapper[4801]: E1124 21:21:24.660521 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="extract" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.660529 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="extract" Nov 24 21:21:24 crc kubenswrapper[4801]: E1124 21:21:24.660542 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="pull" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.660551 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="pull" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.660731 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="356996ee-8ace-43af-8ef1-4ca116d86ffe" containerName="extract" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.661414 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.664459 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rv8pd" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.667002 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.677731 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.680355 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nqqlk"] Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.779298 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ct2\" (UniqueName: \"kubernetes.io/projected/c739d740-88a4-4d6d-875d-77434b46042b-kube-api-access-66ct2\") pod \"nmstate-operator-557fdffb88-nqqlk\" (UID: \"c739d740-88a4-4d6d-875d-77434b46042b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.881412 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ct2\" (UniqueName: \"kubernetes.io/projected/c739d740-88a4-4d6d-875d-77434b46042b-kube-api-access-66ct2\") pod \"nmstate-operator-557fdffb88-nqqlk\" (UID: \"c739d740-88a4-4d6d-875d-77434b46042b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.919116 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ct2\" (UniqueName: \"kubernetes.io/projected/c739d740-88a4-4d6d-875d-77434b46042b-kube-api-access-66ct2\") pod \"nmstate-operator-557fdffb88-nqqlk\" (UID: 
\"c739d740-88a4-4d6d-875d-77434b46042b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" Nov 24 21:21:24 crc kubenswrapper[4801]: I1124 21:21:24.984802 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" Nov 24 21:21:25 crc kubenswrapper[4801]: I1124 21:21:25.453715 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nqqlk"] Nov 24 21:21:26 crc kubenswrapper[4801]: I1124 21:21:26.224348 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" event={"ID":"c739d740-88a4-4d6d-875d-77434b46042b","Type":"ContainerStarted","Data":"107813b970ddff899e471fe10d5926af78643b7b3f200faa8a47c0e0bdabd19b"} Nov 24 21:21:29 crc kubenswrapper[4801]: I1124 21:21:29.249603 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" event={"ID":"c739d740-88a4-4d6d-875d-77434b46042b","Type":"ContainerStarted","Data":"abcfc60e7ab32ce470f7c125de6cf3318f72749704e9015f4d26d07024fb2ea9"} Nov 24 21:21:29 crc kubenswrapper[4801]: I1124 21:21:29.281242 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-nqqlk" podStartSLOduration=2.606838271 podStartE2EDuration="5.281222579s" podCreationTimestamp="2025-11-24 21:21:24 +0000 UTC" firstStartedPulling="2025-11-24 21:21:25.464973052 +0000 UTC m=+857.547559762" lastFinishedPulling="2025-11-24 21:21:28.13935739 +0000 UTC m=+860.221944070" observedRunningTime="2025-11-24 21:21:29.276088681 +0000 UTC m=+861.358675371" watchObservedRunningTime="2025-11-24 21:21:29.281222579 +0000 UTC m=+861.363809249" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.359510 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 
21:21:30.360728 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.363089 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dwql8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.377391 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.383100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl662\" (UniqueName: \"kubernetes.io/projected/22bf0a52-c439-4e4e-95cf-f34685cc8185-kube-api-access-xl662\") pod \"nmstate-metrics-5dcf9c57c5-xxkcq\" (UID: \"22bf0a52-c439-4e4e-95cf-f34685cc8185\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.384587 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.384701 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.386974 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.409407 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.420050 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-db4b2"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.425619 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.485409 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pkc\" (UniqueName: \"kubernetes.io/projected/5d6ad10f-790a-49ed-aff0-13a1bd01c476-kube-api-access-96pkc\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.485709 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl662\" (UniqueName: \"kubernetes.io/projected/22bf0a52-c439-4e4e-95cf-f34685cc8185-kube-api-access-xl662\") pod \"nmstate-metrics-5dcf9c57c5-xxkcq\" (UID: \"22bf0a52-c439-4e4e-95cf-f34685cc8185\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.485762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-nmstate-lock\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.485793 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/26c45212-f2f9-452d-adb5-64ab0fd1448b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: \"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.486176 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-ovs-socket\") pod 
\"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.486281 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-dbus-socket\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.486357 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtxv\" (UniqueName: \"kubernetes.io/projected/26c45212-f2f9-452d-adb5-64ab0fd1448b-kube-api-access-nrtxv\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: \"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.524940 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl662\" (UniqueName: \"kubernetes.io/projected/22bf0a52-c439-4e4e-95cf-f34685cc8185-kube-api-access-xl662\") pod \"nmstate-metrics-5dcf9c57c5-xxkcq\" (UID: \"22bf0a52-c439-4e4e-95cf-f34685cc8185\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.543051 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.543999 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.552489 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.552672 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.552785 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8bfqg" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.569432 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.588858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5944a8e8-b2b1-4b07-8009-daa3215b3612-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589345 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-nmstate-lock\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/26c45212-f2f9-452d-adb5-64ab0fd1448b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: \"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 
21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scwk\" (UniqueName: \"kubernetes.io/projected/5944a8e8-b2b1-4b07-8009-daa3215b3612-kube-api-access-7scwk\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589639 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-ovs-socket\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589692 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589716 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-dbus-socket\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589765 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtxv\" (UniqueName: \"kubernetes.io/projected/26c45212-f2f9-452d-adb5-64ab0fd1448b-kube-api-access-nrtxv\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: 
\"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.589928 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pkc\" (UniqueName: \"kubernetes.io/projected/5d6ad10f-790a-49ed-aff0-13a1bd01c476-kube-api-access-96pkc\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.590522 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-nmstate-lock\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.591718 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-ovs-socket\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.592036 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d6ad10f-790a-49ed-aff0-13a1bd01c476-dbus-socket\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.612859 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrtxv\" (UniqueName: \"kubernetes.io/projected/26c45212-f2f9-452d-adb5-64ab0fd1448b-kube-api-access-nrtxv\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: \"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.612944 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pkc\" (UniqueName: \"kubernetes.io/projected/5d6ad10f-790a-49ed-aff0-13a1bd01c476-kube-api-access-96pkc\") pod \"nmstate-handler-db4b2\" (UID: \"5d6ad10f-790a-49ed-aff0-13a1bd01c476\") " pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.615761 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/26c45212-f2f9-452d-adb5-64ab0fd1448b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-4m2h8\" (UID: \"26c45212-f2f9-452d-adb5-64ab0fd1448b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.679763 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.692723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scwk\" (UniqueName: \"kubernetes.io/projected/5944a8e8-b2b1-4b07-8009-daa3215b3612-kube-api-access-7scwk\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.692854 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.692930 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5944a8e8-b2b1-4b07-8009-daa3215b3612-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: E1124 21:21:30.693234 4801 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 21:21:30 crc kubenswrapper[4801]: E1124 21:21:30.693406 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert podName:5944a8e8-b2b1-4b07-8009-daa3215b3612 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:31.193355076 +0000 UTC m=+863.275941746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-hrx9b" (UID: "5944a8e8-b2b1-4b07-8009-daa3215b3612") : secret "plugin-serving-cert" not found Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.693858 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5944a8e8-b2b1-4b07-8009-daa3215b3612-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.705099 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.719273 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scwk\" (UniqueName: \"kubernetes.io/projected/5944a8e8-b2b1-4b07-8009-daa3215b3612-kube-api-access-7scwk\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.750544 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.764092 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.766410 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.791711 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794163 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794211 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " 
pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794245 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794264 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794282 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794305 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphbh\" (UniqueName: \"kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.794358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca\") pod 
\"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897111 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897164 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897190 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897208 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897230 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert\") pod 
\"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897243 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphbh\" (UniqueName: \"kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.897292 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.898511 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.899089 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.905963 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert\") pod \"console-6675df4db6-8rtvb\" (UID: 
\"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.906733 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.907276 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.907912 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:30 crc kubenswrapper[4801]: I1124 21:21:30.932074 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphbh\" (UniqueName: \"kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh\") pod \"console-6675df4db6-8rtvb\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") " pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.092554 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.202649 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.207964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5944a8e8-b2b1-4b07-8009-daa3215b3612-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hrx9b\" (UID: \"5944a8e8-b2b1-4b07-8009-daa3215b3612\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.253008 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq"] Nov 24 21:21:31 crc kubenswrapper[4801]: W1124 21:21:31.268116 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22bf0a52_c439_4e4e_95cf_f34685cc8185.slice/crio-95d1d17ad6983cfadc33e61b140079ea2d425dc2142b36096ee0c867f163ce61 WatchSource:0}: Error finding container 95d1d17ad6983cfadc33e61b140079ea2d425dc2142b36096ee0c867f163ce61: Status 404 returned error can't find the container with id 95d1d17ad6983cfadc33e61b140079ea2d425dc2142b36096ee0c867f163ce61 Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.268208 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-db4b2" event={"ID":"5d6ad10f-790a-49ed-aff0-13a1bd01c476","Type":"ContainerStarted","Data":"c3c15c8e2668258fbb584f4d0ffb7a3ed3f5d14d84b8c48258f40c325eec0556"} Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 
21:21:31.371799 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8"] Nov 24 21:21:31 crc kubenswrapper[4801]: W1124 21:21:31.376012 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26c45212_f2f9_452d_adb5_64ab0fd1448b.slice/crio-978e59b0ec157f99836e604d111ec1de9436e68fcf3d334fc943385fcce21712 WatchSource:0}: Error finding container 978e59b0ec157f99836e604d111ec1de9436e68fcf3d334fc943385fcce21712: Status 404 returned error can't find the container with id 978e59b0ec157f99836e604d111ec1de9436e68fcf3d334fc943385fcce21712 Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.476532 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.628935 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:21:31 crc kubenswrapper[4801]: I1124 21:21:31.862217 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b"] Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.280401 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" event={"ID":"22bf0a52-c439-4e4e-95cf-f34685cc8185","Type":"ContainerStarted","Data":"95d1d17ad6983cfadc33e61b140079ea2d425dc2142b36096ee0c867f163ce61"} Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.281775 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" event={"ID":"5944a8e8-b2b1-4b07-8009-daa3215b3612","Type":"ContainerStarted","Data":"7df2b0c22e43a9967c02afd551c8fdeea0f4b7caca6f0ee26d0ee5c51585ca81"} Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.285489 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/console-6675df4db6-8rtvb" event={"ID":"960ccabe-478a-4c38-ab6b-6aa9865d05d1","Type":"ContainerStarted","Data":"f7705a05d94e40c7a0eea56eb930f6ff22cf1c1bc13f06802ac424dbf73bb5dc"} Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.285526 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6675df4db6-8rtvb" event={"ID":"960ccabe-478a-4c38-ab6b-6aa9865d05d1","Type":"ContainerStarted","Data":"4f4200433445ad46e54e4f46e408dc805e97ad58763d699084605b6b4a2ae6be"} Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.287170 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" event={"ID":"26c45212-f2f9-452d-adb5-64ab0fd1448b","Type":"ContainerStarted","Data":"978e59b0ec157f99836e604d111ec1de9436e68fcf3d334fc943385fcce21712"} Nov 24 21:21:32 crc kubenswrapper[4801]: I1124 21:21:32.329920 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6675df4db6-8rtvb" podStartSLOduration=2.329879671 podStartE2EDuration="2.329879671s" podCreationTimestamp="2025-11-24 21:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:32.320067937 +0000 UTC m=+864.402654627" watchObservedRunningTime="2025-11-24 21:21:32.329879671 +0000 UTC m=+864.412466391" Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.333982 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" event={"ID":"26c45212-f2f9-452d-adb5-64ab0fd1448b","Type":"ContainerStarted","Data":"16a4099828eaf31e89d6b39b432924224f673f7638bd2b31cd3ced872ebf1cb3"} Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.336330 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 
21:21:35.338412 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-db4b2" event={"ID":"5d6ad10f-790a-49ed-aff0-13a1bd01c476","Type":"ContainerStarted","Data":"0ad0f860563325677aa5342d785cc692392fd076d5975a23a03e0875abeee6f9"} Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.338488 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.341011 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" event={"ID":"22bf0a52-c439-4e4e-95cf-f34685cc8185","Type":"ContainerStarted","Data":"6a9c468b54a000fb3baf8f1685133865ac94e9d8ff12ebde57c992c1da0f2de8"} Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.343479 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" event={"ID":"5944a8e8-b2b1-4b07-8009-daa3215b3612","Type":"ContainerStarted","Data":"4596877b19bb8756235d2873f6a88803e2a52b04a3f0aec284673d4eae76bfc1"} Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.358758 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" podStartSLOduration=2.239264899 podStartE2EDuration="5.35873169s" podCreationTimestamp="2025-11-24 21:21:30 +0000 UTC" firstStartedPulling="2025-11-24 21:21:31.379293367 +0000 UTC m=+863.461880037" lastFinishedPulling="2025-11-24 21:21:34.498760168 +0000 UTC m=+866.581346828" observedRunningTime="2025-11-24 21:21:35.358449322 +0000 UTC m=+867.441035992" watchObservedRunningTime="2025-11-24 21:21:35.35873169 +0000 UTC m=+867.441318370" Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.381527 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-db4b2" podStartSLOduration=1.6718211429999998 podStartE2EDuration="5.381506545s" 
podCreationTimestamp="2025-11-24 21:21:30 +0000 UTC" firstStartedPulling="2025-11-24 21:21:30.813981596 +0000 UTC m=+862.896568266" lastFinishedPulling="2025-11-24 21:21:34.523666998 +0000 UTC m=+866.606253668" observedRunningTime="2025-11-24 21:21:35.375643994 +0000 UTC m=+867.458230674" watchObservedRunningTime="2025-11-24 21:21:35.381506545 +0000 UTC m=+867.464093235" Nov 24 21:21:35 crc kubenswrapper[4801]: I1124 21:21:35.405471 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hrx9b" podStartSLOduration=2.805388315 podStartE2EDuration="5.405447255s" podCreationTimestamp="2025-11-24 21:21:30 +0000 UTC" firstStartedPulling="2025-11-24 21:21:31.895598682 +0000 UTC m=+863.978185352" lastFinishedPulling="2025-11-24 21:21:34.495657622 +0000 UTC m=+866.578244292" observedRunningTime="2025-11-24 21:21:35.394258319 +0000 UTC m=+867.476844989" watchObservedRunningTime="2025-11-24 21:21:35.405447255 +0000 UTC m=+867.488033925" Nov 24 21:21:37 crc kubenswrapper[4801]: I1124 21:21:37.365010 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" event={"ID":"22bf0a52-c439-4e4e-95cf-f34685cc8185","Type":"ContainerStarted","Data":"412d1b47ee2f53953c775c0635087e351ab65816d26d9fa3ae47efa353c35eef"} Nov 24 21:21:37 crc kubenswrapper[4801]: I1124 21:21:37.393602 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xxkcq" podStartSLOduration=1.4926412519999999 podStartE2EDuration="7.393573012s" podCreationTimestamp="2025-11-24 21:21:30 +0000 UTC" firstStartedPulling="2025-11-24 21:21:31.272049931 +0000 UTC m=+863.354636601" lastFinishedPulling="2025-11-24 21:21:37.172981691 +0000 UTC m=+869.255568361" observedRunningTime="2025-11-24 21:21:37.383050267 +0000 UTC m=+869.465636937" watchObservedRunningTime="2025-11-24 21:21:37.393573012 +0000 UTC m=+869.476159692" Nov 24 21:21:40 
crc kubenswrapper[4801]: I1124 21:21:40.790080 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-db4b2" Nov 24 21:21:41 crc kubenswrapper[4801]: I1124 21:21:41.093646 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:41 crc kubenswrapper[4801]: I1124 21:21:41.093925 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:41 crc kubenswrapper[4801]: I1124 21:21:41.101566 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:41 crc kubenswrapper[4801]: I1124 21:21:41.397622 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:21:41 crc kubenswrapper[4801]: I1124 21:21:41.466947 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:21:50 crc kubenswrapper[4801]: I1124 21:21:50.715999 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-4m2h8" Nov 24 21:21:54 crc kubenswrapper[4801]: I1124 21:21:54.320253 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:21:54 crc kubenswrapper[4801]: I1124 21:21:54.321214 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 24 21:22:06 crc kubenswrapper[4801]: I1124 21:22:06.535655 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b8bf4964f-q79rb" podUID="74e357cb-afbe-47dd-96a2-9e4e703481cd" containerName="console" containerID="cri-o://fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299" gracePeriod=15 Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.010296 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b8bf4964f-q79rb_74e357cb-afbe-47dd-96a2-9e4e703481cd/console/0.log" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.010826 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035514 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035624 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035754 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035832 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-6zdgk\" (UniqueName: \"kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035915 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.035950 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.036058 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca\") pod \"74e357cb-afbe-47dd-96a2-9e4e703481cd\" (UID: \"74e357cb-afbe-47dd-96a2-9e4e703481cd\") " Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.042689 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.044436 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca" (OuterVolumeSpecName: "service-ca") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.047781 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.051404 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.051506 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk" (OuterVolumeSpecName: "kube-api-access-6zdgk") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "kube-api-access-6zdgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.051937 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.053094 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config" (OuterVolumeSpecName: "console-config") pod "74e357cb-afbe-47dd-96a2-9e4e703481cd" (UID: "74e357cb-afbe-47dd-96a2-9e4e703481cd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.143974 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144041 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144056 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144069 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdgk\" (UniqueName: 
\"kubernetes.io/projected/74e357cb-afbe-47dd-96a2-9e4e703481cd-kube-api-access-6zdgk\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144085 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144097 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74e357cb-afbe-47dd-96a2-9e4e703481cd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.144110 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74e357cb-afbe-47dd-96a2-9e4e703481cd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699159 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b8bf4964f-q79rb_74e357cb-afbe-47dd-96a2-9e4e703481cd/console/0.log" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699218 4801 generic.go:334] "Generic (PLEG): container finished" podID="74e357cb-afbe-47dd-96a2-9e4e703481cd" containerID="fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299" exitCode=2 Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699248 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b8bf4964f-q79rb" event={"ID":"74e357cb-afbe-47dd-96a2-9e4e703481cd","Type":"ContainerDied","Data":"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299"} Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b8bf4964f-q79rb" 
event={"ID":"74e357cb-afbe-47dd-96a2-9e4e703481cd","Type":"ContainerDied","Data":"47d1068609f469e4942c1fec3512846cbd5a5d65f477e1f02f80c4898f14913a"} Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699290 4801 scope.go:117] "RemoveContainer" containerID="fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.699350 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b8bf4964f-q79rb" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.726905 4801 scope.go:117] "RemoveContainer" containerID="fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299" Nov 24 21:22:07 crc kubenswrapper[4801]: E1124 21:22:07.727913 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299\": container with ID starting with fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299 not found: ID does not exist" containerID="fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.727956 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299"} err="failed to get container status \"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299\": rpc error: code = NotFound desc = could not find container \"fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299\": container with ID starting with fb5b4b18a85701f64417c143a03af5869632cc78dc8eec51bbdc8c4621b07299 not found: ID does not exist" Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.744282 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:22:07 crc kubenswrapper[4801]: I1124 21:22:07.752084 4801 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b8bf4964f-q79rb"] Nov 24 21:22:08 crc kubenswrapper[4801]: I1124 21:22:08.675245 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e357cb-afbe-47dd-96a2-9e4e703481cd" path="/var/lib/kubelet/pods/74e357cb-afbe-47dd-96a2-9e4e703481cd/volumes" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.933508 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj"] Nov 24 21:22:10 crc kubenswrapper[4801]: E1124 21:22:10.934339 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e357cb-afbe-47dd-96a2-9e4e703481cd" containerName="console" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.934352 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e357cb-afbe-47dd-96a2-9e4e703481cd" containerName="console" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.934538 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e357cb-afbe-47dd-96a2-9e4e703481cd" containerName="console" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.935672 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.937753 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:22:10 crc kubenswrapper[4801]: I1124 21:22:10.947696 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj"] Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.083068 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.083629 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.083663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq56v\" (UniqueName: \"kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: 
I1124 21:22:11.187697 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.187784 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.187820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq56v\" (UniqueName: \"kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.188254 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.188260 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.208025 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq56v\" (UniqueName: \"kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.289733 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:11 crc kubenswrapper[4801]: I1124 21:22:11.735283 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj"] Nov 24 21:22:11 crc kubenswrapper[4801]: W1124 21:22:11.742480 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56ac8f4_6c0b_4d87_b191_392103a75b60.slice/crio-1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b WatchSource:0}: Error finding container 1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b: Status 404 returned error can't find the container with id 1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b Nov 24 21:22:12 crc kubenswrapper[4801]: I1124 21:22:12.744111 4801 generic.go:334] "Generic (PLEG): container finished" podID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerID="582a1ab743fbf4e2aa1c9fd16c88a5e1d21f138a2e409bcccb9ccd5e6dd29d08" exitCode=0 
Nov 24 21:22:12 crc kubenswrapper[4801]: I1124 21:22:12.744243 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" event={"ID":"d56ac8f4-6c0b-4d87-b191-392103a75b60","Type":"ContainerDied","Data":"582a1ab743fbf4e2aa1c9fd16c88a5e1d21f138a2e409bcccb9ccd5e6dd29d08"} Nov 24 21:22:12 crc kubenswrapper[4801]: I1124 21:22:12.744551 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" event={"ID":"d56ac8f4-6c0b-4d87-b191-392103a75b60","Type":"ContainerStarted","Data":"1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b"} Nov 24 21:22:12 crc kubenswrapper[4801]: I1124 21:22:12.748063 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:22:14 crc kubenswrapper[4801]: I1124 21:22:14.768948 4801 generic.go:334] "Generic (PLEG): container finished" podID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerID="6dfeddcb5ea8be309e2a4260cfc62d1555eba58d0c24525a941672e307c1a706" exitCode=0 Nov 24 21:22:14 crc kubenswrapper[4801]: I1124 21:22:14.769029 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" event={"ID":"d56ac8f4-6c0b-4d87-b191-392103a75b60","Type":"ContainerDied","Data":"6dfeddcb5ea8be309e2a4260cfc62d1555eba58d0c24525a941672e307c1a706"} Nov 24 21:22:15 crc kubenswrapper[4801]: I1124 21:22:15.779879 4801 generic.go:334] "Generic (PLEG): container finished" podID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerID="6f0fbdc718d2500d8f4e5d3230b875433e499706bbdf8e1551ed9afcb635cca4" exitCode=0 Nov 24 21:22:15 crc kubenswrapper[4801]: I1124 21:22:15.779968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" 
event={"ID":"d56ac8f4-6c0b-4d87-b191-392103a75b60","Type":"ContainerDied","Data":"6f0fbdc718d2500d8f4e5d3230b875433e499706bbdf8e1551ed9afcb635cca4"} Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.032819 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.117675 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq56v\" (UniqueName: \"kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v\") pod \"d56ac8f4-6c0b-4d87-b191-392103a75b60\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.117738 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util\") pod \"d56ac8f4-6c0b-4d87-b191-392103a75b60\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.117776 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle\") pod \"d56ac8f4-6c0b-4d87-b191-392103a75b60\" (UID: \"d56ac8f4-6c0b-4d87-b191-392103a75b60\") " Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.119001 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle" (OuterVolumeSpecName: "bundle") pod "d56ac8f4-6c0b-4d87-b191-392103a75b60" (UID: "d56ac8f4-6c0b-4d87-b191-392103a75b60"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.124784 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v" (OuterVolumeSpecName: "kube-api-access-kq56v") pod "d56ac8f4-6c0b-4d87-b191-392103a75b60" (UID: "d56ac8f4-6c0b-4d87-b191-392103a75b60"). InnerVolumeSpecName "kube-api-access-kq56v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.131847 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util" (OuterVolumeSpecName: "util") pod "d56ac8f4-6c0b-4d87-b191-392103a75b60" (UID: "d56ac8f4-6c0b-4d87-b191-392103a75b60"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.219687 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq56v\" (UniqueName: \"kubernetes.io/projected/d56ac8f4-6c0b-4d87-b191-392103a75b60-kube-api-access-kq56v\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.219732 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.219742 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d56ac8f4-6c0b-4d87-b191-392103a75b60-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.799059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" 
event={"ID":"d56ac8f4-6c0b-4d87-b191-392103a75b60","Type":"ContainerDied","Data":"1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b"} Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.799115 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1824fe8b81d1f440d25950ba4e440bf049523688712d41b8e60eec9587a0e31b" Nov 24 21:22:17 crc kubenswrapper[4801]: I1124 21:22:17.799186 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj" Nov 24 21:22:24 crc kubenswrapper[4801]: I1124 21:22:24.319864 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:22:24 crc kubenswrapper[4801]: I1124 21:22:24.321700 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.665855 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf"] Nov 24 21:22:25 crc kubenswrapper[4801]: E1124 21:22:25.666618 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="pull" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.666630 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="pull" Nov 24 21:22:25 crc kubenswrapper[4801]: E1124 21:22:25.666656 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="util" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.666662 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="util" Nov 24 21:22:25 crc kubenswrapper[4801]: E1124 21:22:25.666673 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="extract" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.666681 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="extract" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.666819 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56ac8f4-6c0b-4d87-b191-392103a75b60" containerName="extract" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.667446 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.668753 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d5zhv" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.668944 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.669669 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.673249 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.676020 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf"] Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.678454 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.776053 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtd8\" (UniqueName: \"kubernetes.io/projected/b7735a1b-89b9-42e8-9f1d-8cae09154876-kube-api-access-4xtd8\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.776103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-apiservice-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.776172 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-webhook-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.877869 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtd8\" (UniqueName: \"kubernetes.io/projected/b7735a1b-89b9-42e8-9f1d-8cae09154876-kube-api-access-4xtd8\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: 
\"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.877930 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-apiservice-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.877993 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-webhook-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.897964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-webhook-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.897998 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7735a1b-89b9-42e8-9f1d-8cae09154876-apiservice-cert\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.908424 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4xtd8\" (UniqueName: \"kubernetes.io/projected/b7735a1b-89b9-42e8-9f1d-8cae09154876-kube-api-access-4xtd8\") pod \"metallb-operator-controller-manager-788b7684b7-wx8kf\" (UID: \"b7735a1b-89b9-42e8-9f1d-8cae09154876\") " pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.938061 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq"] Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.939060 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.941642 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.942914 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.942982 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sdsd2" Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.962557 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq"] Nov 24 21:22:25 crc kubenswrapper[4801]: I1124 21:22:25.989193 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.081048 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-apiservice-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.081595 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-webhook-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.081631 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjg8l\" (UniqueName: \"kubernetes.io/projected/0475a07b-3963-4a4b-af67-4db99502d8a4-kube-api-access-bjg8l\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.183784 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-webhook-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.183845 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bjg8l\" (UniqueName: \"kubernetes.io/projected/0475a07b-3963-4a4b-af67-4db99502d8a4-kube-api-access-bjg8l\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.183944 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-apiservice-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.192965 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-apiservice-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.195080 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0475a07b-3963-4a4b-af67-4db99502d8a4-webhook-cert\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") " pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.224388 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjg8l\" (UniqueName: \"kubernetes.io/projected/0475a07b-3963-4a4b-af67-4db99502d8a4-kube-api-access-bjg8l\") pod \"metallb-operator-webhook-server-57ddf9c5d7-jfttq\" (UID: \"0475a07b-3963-4a4b-af67-4db99502d8a4\") 
" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.269810 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.589013 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf"] Nov 24 21:22:26 crc kubenswrapper[4801]: W1124 21:22:26.599564 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7735a1b_89b9_42e8_9f1d_8cae09154876.slice/crio-37035f884ec87df39145f04b47f475e03627c621478e8b0046409f0279646830 WatchSource:0}: Error finding container 37035f884ec87df39145f04b47f475e03627c621478e8b0046409f0279646830: Status 404 returned error can't find the container with id 37035f884ec87df39145f04b47f475e03627c621478e8b0046409f0279646830 Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.815870 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq"] Nov 24 21:22:26 crc kubenswrapper[4801]: W1124 21:22:26.820748 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0475a07b_3963_4a4b_af67_4db99502d8a4.slice/crio-b5e00cfc8b2956144aafdcd9cec7a7fab1e5935ec7f00130f4931bd1c711df8f WatchSource:0}: Error finding container b5e00cfc8b2956144aafdcd9cec7a7fab1e5935ec7f00130f4931bd1c711df8f: Status 404 returned error can't find the container with id b5e00cfc8b2956144aafdcd9cec7a7fab1e5935ec7f00130f4931bd1c711df8f Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.862325 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" 
event={"ID":"b7735a1b-89b9-42e8-9f1d-8cae09154876","Type":"ContainerStarted","Data":"37035f884ec87df39145f04b47f475e03627c621478e8b0046409f0279646830"} Nov 24 21:22:26 crc kubenswrapper[4801]: I1124 21:22:26.863667 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" event={"ID":"0475a07b-3963-4a4b-af67-4db99502d8a4","Type":"ContainerStarted","Data":"b5e00cfc8b2956144aafdcd9cec7a7fab1e5935ec7f00130f4931bd1c711df8f"} Nov 24 21:22:30 crc kubenswrapper[4801]: I1124 21:22:30.901140 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" event={"ID":"b7735a1b-89b9-42e8-9f1d-8cae09154876","Type":"ContainerStarted","Data":"93c4aa8099cb88ff6a684004b55f4c9840f0e6ac1ac03c1d92be55c990836158"} Nov 24 21:22:30 crc kubenswrapper[4801]: I1124 21:22:30.901957 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:22:30 crc kubenswrapper[4801]: I1124 21:22:30.929824 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" podStartSLOduration=2.7813183329999998 podStartE2EDuration="5.929800261s" podCreationTimestamp="2025-11-24 21:22:25 +0000 UTC" firstStartedPulling="2025-11-24 21:22:26.601952284 +0000 UTC m=+918.684538954" lastFinishedPulling="2025-11-24 21:22:29.750434212 +0000 UTC m=+921.833020882" observedRunningTime="2025-11-24 21:22:30.924390494 +0000 UTC m=+923.006977164" watchObservedRunningTime="2025-11-24 21:22:30.929800261 +0000 UTC m=+923.012386931" Nov 24 21:22:32 crc kubenswrapper[4801]: I1124 21:22:32.918345 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" 
event={"ID":"0475a07b-3963-4a4b-af67-4db99502d8a4","Type":"ContainerStarted","Data":"673d973444029ed679fdd06b5b10147ad599ecd112c26c4ac5a81ae0b554763d"} Nov 24 21:22:32 crc kubenswrapper[4801]: I1124 21:22:32.918861 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:32 crc kubenswrapper[4801]: I1124 21:22:32.950976 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" podStartSLOduration=2.974033771 podStartE2EDuration="7.95095043s" podCreationTimestamp="2025-11-24 21:22:25 +0000 UTC" firstStartedPulling="2025-11-24 21:22:26.828280222 +0000 UTC m=+918.910866892" lastFinishedPulling="2025-11-24 21:22:31.805196891 +0000 UTC m=+923.887783551" observedRunningTime="2025-11-24 21:22:32.945089008 +0000 UTC m=+925.027675678" watchObservedRunningTime="2025-11-24 21:22:32.95095043 +0000 UTC m=+925.033537100" Nov 24 21:22:46 crc kubenswrapper[4801]: I1124 21:22:46.274796 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-57ddf9c5d7-jfttq" Nov 24 21:22:54 crc kubenswrapper[4801]: I1124 21:22:54.320766 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:22:54 crc kubenswrapper[4801]: I1124 21:22:54.321349 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:22:54 crc kubenswrapper[4801]: I1124 
21:22:54.321418 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:22:54 crc kubenswrapper[4801]: I1124 21:22:54.322152 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:22:54 crc kubenswrapper[4801]: I1124 21:22:54.322224 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c" gracePeriod=600 Nov 24 21:22:55 crc kubenswrapper[4801]: I1124 21:22:55.096295 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c" exitCode=0 Nov 24 21:22:55 crc kubenswrapper[4801]: I1124 21:22:55.096351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c"} Nov 24 21:22:55 crc kubenswrapper[4801]: I1124 21:22:55.097329 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b"} Nov 24 21:22:55 crc kubenswrapper[4801]: I1124 21:22:55.097401 4801 scope.go:117] 
"RemoveContainer" containerID="9a0d6aeedfbe81cd46691ce54a719da9e45f8039e5616fd23cf3d03c59f4c218" Nov 24 21:23:05 crc kubenswrapper[4801]: I1124 21:23:05.992959 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-788b7684b7-wx8kf" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.912857 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-8slp7"] Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.920808 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.924599 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.924743 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hrf86" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.948825 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zzpmw"] Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.977169 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-8slp7"] Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.977400 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.982845 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 21:23:06 crc kubenswrapper[4801]: I1124 21:23:06.983197 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.059076 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-t96bl"] Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.061073 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.067064 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9dztp" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.067338 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.067489 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.067619 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.080017 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvfsb"] Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.081559 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.087064 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.090485 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvfsb"] Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.110870 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-cert\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: \"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.110935 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvsq\" (UniqueName: \"kubernetes.io/projected/bf966c99-b331-4a64-a336-3e44898b4068-kube-api-access-8pvsq\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.110963 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscw7\" (UniqueName: \"kubernetes.io/projected/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-kube-api-access-vscw7\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: \"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-sockets\") pod \"frr-k8s-zzpmw\" (UID: 
\"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111073 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-conf\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111121 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-reloader\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111142 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-metrics\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.111165 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bf966c99-b331-4a64-a336-3e44898b4068-frr-startup\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 
21:23:07.213150 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-sockets\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213207 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-conf\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213279 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6zt\" (UniqueName: \"kubernetes.io/projected/2618ece2-7600-4e60-add6-a7e8a2152cfe-kube-api-access-5s6zt\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213299 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-reloader\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213321 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-metrics\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213343 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-cert\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213377 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bf966c99-b331-4a64-a336-3e44898b4068-frr-startup\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213414 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213439 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-cert\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: 
\"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213458 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pvsq\" (UniqueName: \"kubernetes.io/projected/bf966c99-b331-4a64-a336-3e44898b4068-kube-api-access-8pvsq\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213479 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscw7\" (UniqueName: \"kubernetes.io/projected/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-kube-api-access-vscw7\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: \"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213512 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-metrics-certs\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213535 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtjt\" (UniqueName: \"kubernetes.io/projected/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-kube-api-access-wgtjt\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2618ece2-7600-4e60-add6-a7e8a2152cfe-metallb-excludel2\") pod 
\"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.213719 4801 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.213770 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs podName:bf966c99-b331-4a64-a336-3e44898b4068 nodeName:}" failed. No retries permitted until 2025-11-24 21:23:07.713752578 +0000 UTC m=+959.796339248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs") pod "frr-k8s-zzpmw" (UID: "bf966c99-b331-4a64-a336-3e44898b4068") : secret "frr-k8s-certs-secret" not found Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.213805 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-sockets\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.214079 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-frr-conf\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.214312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-reloader\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 
21:23:07.214543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bf966c99-b331-4a64-a336-3e44898b4068-metrics\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.215307 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bf966c99-b331-4a64-a336-3e44898b4068-frr-startup\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.220074 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-cert\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: \"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.234238 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pvsq\" (UniqueName: \"kubernetes.io/projected/bf966c99-b331-4a64-a336-3e44898b4068-kube-api-access-8pvsq\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.236857 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscw7\" (UniqueName: \"kubernetes.io/projected/d5953034-4614-44bc-8d2a-5f2a2a6b37d0-kube-api-access-vscw7\") pod \"frr-k8s-webhook-server-6998585d5-8slp7\" (UID: \"d5953034-4614-44bc-8d2a-5f2a2a6b37d0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.243669 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.314965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-metrics-certs\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.315026 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtjt\" (UniqueName: \"kubernetes.io/projected/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-kube-api-access-wgtjt\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.315076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2618ece2-7600-4e60-add6-a7e8a2152cfe-metallb-excludel2\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.315117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.315168 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6zt\" (UniqueName: \"kubernetes.io/projected/2618ece2-7600-4e60-add6-a7e8a2152cfe-kube-api-access-5s6zt\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 
21:23:07.315197 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-cert\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.315225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.315421 4801 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.315479 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs podName:f522ffc9-fe47-42ac-bc8e-2fdba3fbd404 nodeName:}" failed. No retries permitted until 2025-11-24 21:23:07.815460642 +0000 UTC m=+959.898047312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs") pod "controller-6c7b4b5f48-gvfsb" (UID: "f522ffc9-fe47-42ac-bc8e-2fdba3fbd404") : secret "controller-certs-secret" not found Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.315638 4801 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.315662 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist podName:2618ece2-7600-4e60-add6-a7e8a2152cfe nodeName:}" failed. 
No retries permitted until 2025-11-24 21:23:07.815655788 +0000 UTC m=+959.898242458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist") pod "speaker-t96bl" (UID: "2618ece2-7600-4e60-add6-a7e8a2152cfe") : secret "metallb-memberlist" not found Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.316466 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2618ece2-7600-4e60-add6-a7e8a2152cfe-metallb-excludel2\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.322023 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-metrics-certs\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.323966 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-cert\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.345215 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6zt\" (UniqueName: \"kubernetes.io/projected/2618ece2-7600-4e60-add6-a7e8a2152cfe-kube-api-access-5s6zt\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.347156 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtjt\" (UniqueName: 
\"kubernetes.io/projected/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-kube-api-access-wgtjt\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.728211 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.733319 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf966c99-b331-4a64-a336-3e44898b4068-metrics-certs\") pod \"frr-k8s-zzpmw\" (UID: \"bf966c99-b331-4a64-a336-3e44898b4068\") " pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.758745 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-8slp7"] Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.830889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.830982 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.831136 4801 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Nov 24 21:23:07 crc kubenswrapper[4801]: E1124 21:23:07.831219 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist podName:2618ece2-7600-4e60-add6-a7e8a2152cfe nodeName:}" failed. No retries permitted until 2025-11-24 21:23:08.8311955 +0000 UTC m=+960.913782170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist") pod "speaker-t96bl" (UID: "2618ece2-7600-4e60-add6-a7e8a2152cfe") : secret "metallb-memberlist" not found Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.836579 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f522ffc9-fe47-42ac-bc8e-2fdba3fbd404-metrics-certs\") pod \"controller-6c7b4b5f48-gvfsb\" (UID: \"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404\") " pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:07 crc kubenswrapper[4801]: I1124 21:23:07.899691 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.044112 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.215565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"f6ab5355dda93f4da38347b42b9a830a8b13b2ddcaa2d7503a5c36ded0499c06"} Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.216675 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" event={"ID":"d5953034-4614-44bc-8d2a-5f2a2a6b37d0","Type":"ContainerStarted","Data":"b6c8b9e788462b13c83f4967564ac5d7fc7e85e63b146a3854886c7ced3d60cc"} Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.491506 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvfsb"] Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.860029 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.866749 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2618ece2-7600-4e60-add6-a7e8a2152cfe-memberlist\") pod \"speaker-t96bl\" (UID: \"2618ece2-7600-4e60-add6-a7e8a2152cfe\") " pod="metallb-system/speaker-t96bl" Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.887058 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9dztp" Nov 24 21:23:08 crc kubenswrapper[4801]: I1124 21:23:08.895708 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-t96bl" Nov 24 21:23:08 crc kubenswrapper[4801]: W1124 21:23:08.963233 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2618ece2_7600_4e60_add6_a7e8a2152cfe.slice/crio-0608e70534b643cbca6a8bdd9415b8c2cd5b30a1f03b8800555b99ce148ff169 WatchSource:0}: Error finding container 0608e70534b643cbca6a8bdd9415b8c2cd5b30a1f03b8800555b99ce148ff169: Status 404 returned error can't find the container with id 0608e70534b643cbca6a8bdd9415b8c2cd5b30a1f03b8800555b99ce148ff169 Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.232759 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvfsb" event={"ID":"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404","Type":"ContainerStarted","Data":"7ff4829377a746417bcdd92635a5b0777032854bc73c39e39d4821b368aa560e"} Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.232817 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvfsb" event={"ID":"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404","Type":"ContainerStarted","Data":"878cc98a42e23d65f4f4c95afb68edb32c5191e8a887193430a2d280c52c96bb"} Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.232830 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvfsb" event={"ID":"f522ffc9-fe47-42ac-bc8e-2fdba3fbd404","Type":"ContainerStarted","Data":"106ae60161b9ee874d1de0eb39511e09ffbb088c129a1acf556c07ba5458fd8e"} Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.232924 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.243313 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t96bl" 
event={"ID":"2618ece2-7600-4e60-add6-a7e8a2152cfe","Type":"ContainerStarted","Data":"0608e70534b643cbca6a8bdd9415b8c2cd5b30a1f03b8800555b99ce148ff169"} Nov 24 21:23:09 crc kubenswrapper[4801]: I1124 21:23:09.260403 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-gvfsb" podStartSLOduration=2.260385535 podStartE2EDuration="2.260385535s" podCreationTimestamp="2025-11-24 21:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:23:09.257477165 +0000 UTC m=+961.340063835" watchObservedRunningTime="2025-11-24 21:23:09.260385535 +0000 UTC m=+961.342972195" Nov 24 21:23:10 crc kubenswrapper[4801]: I1124 21:23:10.255247 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t96bl" event={"ID":"2618ece2-7600-4e60-add6-a7e8a2152cfe","Type":"ContainerStarted","Data":"d5057f34e48ded1d549eac0ffeea7cacf8b1125229134cc7005d2a39209ea0eb"} Nov 24 21:23:10 crc kubenswrapper[4801]: I1124 21:23:10.255674 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t96bl" event={"ID":"2618ece2-7600-4e60-add6-a7e8a2152cfe","Type":"ContainerStarted","Data":"95476b72f6df0382d9c81d5c4c75062390670b9ee0e378edb761d69ee32a5c2d"} Nov 24 21:23:10 crc kubenswrapper[4801]: I1124 21:23:10.255751 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-t96bl" Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.330520 4801 generic.go:334] "Generic (PLEG): container finished" podID="bf966c99-b331-4a64-a336-3e44898b4068" containerID="c53a60dda56589ab8760e4d7970db87f828478936ea86ec499ee003b7d1cbe0d" exitCode=0 Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.331259 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" 
event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerDied","Data":"c53a60dda56589ab8760e4d7970db87f828478936ea86ec499ee003b7d1cbe0d"} Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.338187 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" event={"ID":"d5953034-4614-44bc-8d2a-5f2a2a6b37d0","Type":"ContainerStarted","Data":"9c94b89aa5641fcdeee37f072097b10ee5f538006a9b6fc0a4ea6949c6102900"} Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.339552 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.372174 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-t96bl" podStartSLOduration=11.372148316 podStartE2EDuration="11.372148316s" podCreationTimestamp="2025-11-24 21:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:23:10.285137803 +0000 UTC m=+962.367724473" watchObservedRunningTime="2025-11-24 21:23:17.372148316 +0000 UTC m=+969.454734986" Nov 24 21:23:17 crc kubenswrapper[4801]: I1124 21:23:17.396497 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" podStartSLOduration=3.078120463 podStartE2EDuration="11.396474176s" podCreationTimestamp="2025-11-24 21:23:06 +0000 UTC" firstStartedPulling="2025-11-24 21:23:07.769195693 +0000 UTC m=+959.851782373" lastFinishedPulling="2025-11-24 21:23:16.087549366 +0000 UTC m=+968.170136086" observedRunningTime="2025-11-24 21:23:17.385827913 +0000 UTC m=+969.468414583" watchObservedRunningTime="2025-11-24 21:23:17.396474176 +0000 UTC m=+969.479060846" Nov 24 21:23:18 crc kubenswrapper[4801]: I1124 21:23:18.051074 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/controller-6c7b4b5f48-gvfsb" Nov 24 21:23:18 crc kubenswrapper[4801]: I1124 21:23:18.351492 4801 generic.go:334] "Generic (PLEG): container finished" podID="bf966c99-b331-4a64-a336-3e44898b4068" containerID="c436991425f7afae1c33c9e2bc3da1e6ab72137c542c2f4dbecb18984074dbfd" exitCode=0 Nov 24 21:23:18 crc kubenswrapper[4801]: I1124 21:23:18.351635 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerDied","Data":"c436991425f7afae1c33c9e2bc3da1e6ab72137c542c2f4dbecb18984074dbfd"} Nov 24 21:23:19 crc kubenswrapper[4801]: I1124 21:23:19.362263 4801 generic.go:334] "Generic (PLEG): container finished" podID="bf966c99-b331-4a64-a336-3e44898b4068" containerID="b09bb80d06b0b3a55c749d73691143c3067ba5e43b40da462feb47aee688aef2" exitCode=0 Nov 24 21:23:19 crc kubenswrapper[4801]: I1124 21:23:19.362422 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerDied","Data":"b09bb80d06b0b3a55c749d73691143c3067ba5e43b40da462feb47aee688aef2"} Nov 24 21:23:20 crc kubenswrapper[4801]: I1124 21:23:20.386332 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"bc6c97a0666eba3d23ed020d32c57dfd5ff38cbb48de4fb1e1530abe2e29d915"} Nov 24 21:23:20 crc kubenswrapper[4801]: I1124 21:23:20.386836 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"6b53b1a367fc5cc8c763203fd3a8898c60ebe4b3e4d54833289df59da34f5f4d"} Nov 24 21:23:20 crc kubenswrapper[4801]: I1124 21:23:20.386850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" 
event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"b5e59b70bcb8afea69d490101c690f5899672d3e2f8429cb3d8fae9384fc9b2c"} Nov 24 21:23:20 crc kubenswrapper[4801]: I1124 21:23:20.386862 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"49dd1ab6f29c30ae77fd9b1d553236a843d34453fa7b20e25a0aba8c2c9e3197"} Nov 24 21:23:21 crc kubenswrapper[4801]: I1124 21:23:21.401512 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"ef9799229147497bd889bb81fa4ab5a1332dc68d35a3341954492bc8dc16e772"} Nov 24 21:23:22 crc kubenswrapper[4801]: I1124 21:23:22.419036 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zzpmw" event={"ID":"bf966c99-b331-4a64-a336-3e44898b4068","Type":"ContainerStarted","Data":"f2c39cd2a7b4cad3fc2326fe0f855cdb089b130eee6da1e25353c5926c2aa8b9"} Nov 24 21:23:22 crc kubenswrapper[4801]: I1124 21:23:22.419355 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:22 crc kubenswrapper[4801]: I1124 21:23:22.468892 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zzpmw" podStartSLOduration=8.370948953 podStartE2EDuration="16.468868351s" podCreationTimestamp="2025-11-24 21:23:06 +0000 UTC" firstStartedPulling="2025-11-24 21:23:08.028900003 +0000 UTC m=+960.111486683" lastFinishedPulling="2025-11-24 21:23:16.126819401 +0000 UTC m=+968.209406081" observedRunningTime="2025-11-24 21:23:22.456958459 +0000 UTC m=+974.539545129" watchObservedRunningTime="2025-11-24 21:23:22.468868351 +0000 UTC m=+974.551455021" Nov 24 21:23:22 crc kubenswrapper[4801]: I1124 21:23:22.900791 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:23 crc kubenswrapper[4801]: I1124 21:23:23.003196 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:27 crc kubenswrapper[4801]: I1124 21:23:27.250392 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-8slp7" Nov 24 21:23:28 crc kubenswrapper[4801]: I1124 21:23:28.900120 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-t96bl" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.738790 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.740990 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.745525 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.745797 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-r6gsp" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.746173 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.762451 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.876223 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zpj\" (UniqueName: \"kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj\") pod 
\"openstack-operator-index-pdh4q\" (UID: \"297cb2c3-a896-4bcc-9247-ea3e823bacd7\") " pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.978060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zpj\" (UniqueName: \"kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj\") pod \"openstack-operator-index-pdh4q\" (UID: \"297cb2c3-a896-4bcc-9247-ea3e823bacd7\") " pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:31 crc kubenswrapper[4801]: I1124 21:23:31.998089 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zpj\" (UniqueName: \"kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj\") pod \"openstack-operator-index-pdh4q\" (UID: \"297cb2c3-a896-4bcc-9247-ea3e823bacd7\") " pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:32 crc kubenswrapper[4801]: I1124 21:23:32.080824 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:32 crc kubenswrapper[4801]: I1124 21:23:32.518704 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:33 crc kubenswrapper[4801]: I1124 21:23:33.539293 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pdh4q" event={"ID":"297cb2c3-a896-4bcc-9247-ea3e823bacd7","Type":"ContainerStarted","Data":"4c8cc09c080a079fe808049790eebc16a6ae94c5cc44beb84f228524cef5e21f"} Nov 24 21:23:34 crc kubenswrapper[4801]: I1124 21:23:34.902433 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.515859 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w2fqv"] Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.517142 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.531988 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w2fqv"] Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.563524 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pdh4q" event={"ID":"297cb2c3-a896-4bcc-9247-ea3e823bacd7","Type":"ContainerStarted","Data":"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58"} Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.563747 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pdh4q" podUID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" containerName="registry-server" containerID="cri-o://19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58" gracePeriod=2 Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.591882 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pdh4q" podStartSLOduration=2.195254591 podStartE2EDuration="4.591805558s" podCreationTimestamp="2025-11-24 21:23:31 +0000 UTC" firstStartedPulling="2025-11-24 21:23:32.5266049 +0000 UTC m=+984.609191570" lastFinishedPulling="2025-11-24 21:23:34.923155867 +0000 UTC m=+987.005742537" observedRunningTime="2025-11-24 21:23:35.582055884 +0000 UTC m=+987.664642564" watchObservedRunningTime="2025-11-24 21:23:35.591805558 +0000 UTC m=+987.674392238" Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.663257 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxvn\" (UniqueName: \"kubernetes.io/projected/d5b5e147-92f8-4f47-995b-5e1710b5e593-kube-api-access-4xxvn\") pod \"openstack-operator-index-w2fqv\" (UID: \"d5b5e147-92f8-4f47-995b-5e1710b5e593\") " 
pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.764601 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xxvn\" (UniqueName: \"kubernetes.io/projected/d5b5e147-92f8-4f47-995b-5e1710b5e593-kube-api-access-4xxvn\") pod \"openstack-operator-index-w2fqv\" (UID: \"d5b5e147-92f8-4f47-995b-5e1710b5e593\") " pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.804560 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxvn\" (UniqueName: \"kubernetes.io/projected/d5b5e147-92f8-4f47-995b-5e1710b5e593-kube-api-access-4xxvn\") pod \"openstack-operator-index-w2fqv\" (UID: \"d5b5e147-92f8-4f47-995b-5e1710b5e593\") " pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:35 crc kubenswrapper[4801]: I1124 21:23:35.843672 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.079278 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.274012 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9zpj\" (UniqueName: \"kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj\") pod \"297cb2c3-a896-4bcc-9247-ea3e823bacd7\" (UID: \"297cb2c3-a896-4bcc-9247-ea3e823bacd7\") " Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.280197 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj" (OuterVolumeSpecName: "kube-api-access-x9zpj") pod "297cb2c3-a896-4bcc-9247-ea3e823bacd7" (UID: "297cb2c3-a896-4bcc-9247-ea3e823bacd7"). InnerVolumeSpecName "kube-api-access-x9zpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.333861 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w2fqv"] Nov 24 21:23:36 crc kubenswrapper[4801]: W1124 21:23:36.337305 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5b5e147_92f8_4f47_995b_5e1710b5e593.slice/crio-8d444f11c0158ee7c18372f5e4b94e7a83f4d4fc85beda48cab14d5ee0d05e15 WatchSource:0}: Error finding container 8d444f11c0158ee7c18372f5e4b94e7a83f4d4fc85beda48cab14d5ee0d05e15: Status 404 returned error can't find the container with id 8d444f11c0158ee7c18372f5e4b94e7a83f4d4fc85beda48cab14d5ee0d05e15 Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.376569 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9zpj\" (UniqueName: \"kubernetes.io/projected/297cb2c3-a896-4bcc-9247-ea3e823bacd7-kube-api-access-x9zpj\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.579415 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w2fqv" event={"ID":"d5b5e147-92f8-4f47-995b-5e1710b5e593","Type":"ContainerStarted","Data":"8d444f11c0158ee7c18372f5e4b94e7a83f4d4fc85beda48cab14d5ee0d05e15"} Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.584462 4801 generic.go:334] "Generic (PLEG): container finished" podID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" containerID="19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58" exitCode=0 Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.584539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pdh4q" event={"ID":"297cb2c3-a896-4bcc-9247-ea3e823bacd7","Type":"ContainerDied","Data":"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58"} Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.584589 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pdh4q" event={"ID":"297cb2c3-a896-4bcc-9247-ea3e823bacd7","Type":"ContainerDied","Data":"4c8cc09c080a079fe808049790eebc16a6ae94c5cc44beb84f228524cef5e21f"} Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.584640 4801 scope.go:117] "RemoveContainer" containerID="19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.584828 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pdh4q" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.610021 4801 scope.go:117] "RemoveContainer" containerID="19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58" Nov 24 21:23:36 crc kubenswrapper[4801]: E1124 21:23:36.610748 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58\": container with ID starting with 19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58 not found: ID does not exist" containerID="19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.610833 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58"} err="failed to get container status \"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58\": rpc error: code = NotFound desc = could not find container \"19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58\": container with ID starting with 19eb5e8b4fdc90c4d3311fcdd1f792d4bca2086a3c3294f87ed6a450fd061d58 not found: ID does not exist" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.614170 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w2fqv" podStartSLOduration=1.563705831 podStartE2EDuration="1.614145107s" podCreationTimestamp="2025-11-24 21:23:35 +0000 UTC" firstStartedPulling="2025-11-24 21:23:36.341868406 +0000 UTC m=+988.424455076" lastFinishedPulling="2025-11-24 21:23:36.392307682 +0000 UTC m=+988.474894352" observedRunningTime="2025-11-24 21:23:36.596882668 +0000 UTC m=+988.679469358" watchObservedRunningTime="2025-11-24 21:23:36.614145107 +0000 UTC m=+988.696731777" Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 
21:23:36.625273 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.634535 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pdh4q"] Nov 24 21:23:36 crc kubenswrapper[4801]: I1124 21:23:36.680397 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" path="/var/lib/kubelet/pods/297cb2c3-a896-4bcc-9247-ea3e823bacd7/volumes" Nov 24 21:23:37 crc kubenswrapper[4801]: I1124 21:23:37.598189 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w2fqv" event={"ID":"d5b5e147-92f8-4f47-995b-5e1710b5e593","Type":"ContainerStarted","Data":"18193d22efda402bd302fe25a08a9b4d9219db4f04e63e19603cf6c130e3f8ff"} Nov 24 21:23:37 crc kubenswrapper[4801]: I1124 21:23:37.906106 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zzpmw" Nov 24 21:23:45 crc kubenswrapper[4801]: I1124 21:23:45.852594 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:45 crc kubenswrapper[4801]: I1124 21:23:45.855754 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:45 crc kubenswrapper[4801]: I1124 21:23:45.883303 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:46 crc kubenswrapper[4801]: I1124 21:23:46.730589 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w2fqv" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.976453 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c"] Nov 24 21:23:47 crc kubenswrapper[4801]: E1124 21:23:47.977637 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" containerName="registry-server" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.977678 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" containerName="registry-server" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.977960 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="297cb2c3-a896-4bcc-9247-ea3e823bacd7" containerName="registry-server" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.980186 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.982420 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-q8qds" Nov 24 21:23:47 crc kubenswrapper[4801]: I1124 21:23:47.999329 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c"] Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.034267 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.034382 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.034448 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.136745 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.136808 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.136832 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" 
(UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.137761 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.137826 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.159979 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x\") pod \"9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.347161 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:48 crc kubenswrapper[4801]: I1124 21:23:48.843262 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c"] Nov 24 21:23:49 crc kubenswrapper[4801]: I1124 21:23:49.716276 4801 generic.go:334] "Generic (PLEG): container finished" podID="972dff04-e157-4f58-b501-f8a02504fb0f" containerID="e61c15cdfb97791dfd0ae0862d2bc8f1cde99cabf41bef73a3326b5bd686d5ad" exitCode=0 Nov 24 21:23:49 crc kubenswrapper[4801]: I1124 21:23:49.716343 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" event={"ID":"972dff04-e157-4f58-b501-f8a02504fb0f","Type":"ContainerDied","Data":"e61c15cdfb97791dfd0ae0862d2bc8f1cde99cabf41bef73a3326b5bd686d5ad"} Nov 24 21:23:49 crc kubenswrapper[4801]: I1124 21:23:49.716899 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" event={"ID":"972dff04-e157-4f58-b501-f8a02504fb0f","Type":"ContainerStarted","Data":"07acf7b454206d421e31a12ba93ef480384d231654f3b4e6fc6bd6ca50975ee8"} Nov 24 21:23:50 crc kubenswrapper[4801]: I1124 21:23:50.727195 4801 generic.go:334] "Generic (PLEG): container finished" podID="972dff04-e157-4f58-b501-f8a02504fb0f" containerID="62b56bb53436a008c277613f95af622424206130da4fb871db43838024998df6" exitCode=0 Nov 24 21:23:50 crc kubenswrapper[4801]: I1124 21:23:50.727289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" event={"ID":"972dff04-e157-4f58-b501-f8a02504fb0f","Type":"ContainerDied","Data":"62b56bb53436a008c277613f95af622424206130da4fb871db43838024998df6"} Nov 24 21:23:51 crc kubenswrapper[4801]: I1124 21:23:51.745415 4801 generic.go:334] 
"Generic (PLEG): container finished" podID="972dff04-e157-4f58-b501-f8a02504fb0f" containerID="ea4ae9e6346d6cc382330b056a29b6784c80fa60e817bf85ff0c8dab11dcb587" exitCode=0 Nov 24 21:23:51 crc kubenswrapper[4801]: I1124 21:23:51.745546 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" event={"ID":"972dff04-e157-4f58-b501-f8a02504fb0f","Type":"ContainerDied","Data":"ea4ae9e6346d6cc382330b056a29b6784c80fa60e817bf85ff0c8dab11dcb587"} Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.263492 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.361184 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle\") pod \"972dff04-e157-4f58-b501-f8a02504fb0f\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.361294 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util\") pod \"972dff04-e157-4f58-b501-f8a02504fb0f\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.361344 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x\") pod \"972dff04-e157-4f58-b501-f8a02504fb0f\" (UID: \"972dff04-e157-4f58-b501-f8a02504fb0f\") " Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.362277 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle" (OuterVolumeSpecName: "bundle") pod "972dff04-e157-4f58-b501-f8a02504fb0f" (UID: "972dff04-e157-4f58-b501-f8a02504fb0f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.368210 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x" (OuterVolumeSpecName: "kube-api-access-7fs7x") pod "972dff04-e157-4f58-b501-f8a02504fb0f" (UID: "972dff04-e157-4f58-b501-f8a02504fb0f"). InnerVolumeSpecName "kube-api-access-7fs7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.374713 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util" (OuterVolumeSpecName: "util") pod "972dff04-e157-4f58-b501-f8a02504fb0f" (UID: "972dff04-e157-4f58-b501-f8a02504fb0f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.464043 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/972dff04-e157-4f58-b501-f8a02504fb0f-kube-api-access-7fs7x\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.464117 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.464127 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/972dff04-e157-4f58-b501-f8a02504fb0f-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.781847 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.781862 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c" event={"ID":"972dff04-e157-4f58-b501-f8a02504fb0f","Type":"ContainerDied","Data":"07acf7b454206d421e31a12ba93ef480384d231654f3b4e6fc6bd6ca50975ee8"} Nov 24 21:23:53 crc kubenswrapper[4801]: I1124 21:23:53.782183 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07acf7b454206d421e31a12ba93ef480384d231654f3b4e6fc6bd6ca50975ee8" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.590705 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k"] Nov 24 21:23:55 crc kubenswrapper[4801]: E1124 21:23:55.591403 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="pull" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.591417 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="pull" Nov 24 21:23:55 crc kubenswrapper[4801]: E1124 21:23:55.591445 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="util" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.591451 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="util" Nov 24 21:23:55 crc kubenswrapper[4801]: E1124 21:23:55.591473 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="extract" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.591479 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="extract" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.591651 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="972dff04-e157-4f58-b501-f8a02504fb0f" containerName="extract" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.592218 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.594765 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2vw8n" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.608784 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfz6t\" (UniqueName: \"kubernetes.io/projected/8ca4a00c-ea8a-438c-a2ac-77a3f04a3471-kube-api-access-gfz6t\") pod \"openstack-operator-controller-operator-7c4c94676-k9g5k\" (UID: \"8ca4a00c-ea8a-438c-a2ac-77a3f04a3471\") " pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.629417 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k"] Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.709546 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfz6t\" (UniqueName: \"kubernetes.io/projected/8ca4a00c-ea8a-438c-a2ac-77a3f04a3471-kube-api-access-gfz6t\") pod \"openstack-operator-controller-operator-7c4c94676-k9g5k\" (UID: \"8ca4a00c-ea8a-438c-a2ac-77a3f04a3471\") " pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.743093 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfz6t\" (UniqueName: \"kubernetes.io/projected/8ca4a00c-ea8a-438c-a2ac-77a3f04a3471-kube-api-access-gfz6t\") pod \"openstack-operator-controller-operator-7c4c94676-k9g5k\" (UID: \"8ca4a00c-ea8a-438c-a2ac-77a3f04a3471\") " pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:23:55 crc kubenswrapper[4801]: I1124 21:23:55.914379 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:23:56 crc kubenswrapper[4801]: I1124 21:23:56.456762 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k"] Nov 24 21:23:56 crc kubenswrapper[4801]: W1124 21:23:56.457440 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca4a00c_ea8a_438c_a2ac_77a3f04a3471.slice/crio-a07365548b8a91d2ffeaab1161cdf101e930d12388ede51c01963ccd20eec207 WatchSource:0}: Error finding container a07365548b8a91d2ffeaab1161cdf101e930d12388ede51c01963ccd20eec207: Status 404 returned error can't find the container with id a07365548b8a91d2ffeaab1161cdf101e930d12388ede51c01963ccd20eec207 Nov 24 21:23:56 crc kubenswrapper[4801]: I1124 21:23:56.809623 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" event={"ID":"8ca4a00c-ea8a-438c-a2ac-77a3f04a3471","Type":"ContainerStarted","Data":"a07365548b8a91d2ffeaab1161cdf101e930d12388ede51c01963ccd20eec207"} Nov 24 21:24:01 crc kubenswrapper[4801]: I1124 21:24:01.863635 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" event={"ID":"8ca4a00c-ea8a-438c-a2ac-77a3f04a3471","Type":"ContainerStarted","Data":"11d1ff0864c7b08e8521b19640fc3dbc8bffa738716c58a3e569ec195196f078"} Nov 24 21:24:01 crc kubenswrapper[4801]: I1124 21:24:01.866786 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:24:01 crc kubenswrapper[4801]: I1124 21:24:01.923450 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" podStartSLOduration=2.213262945 podStartE2EDuration="6.923410465s" podCreationTimestamp="2025-11-24 21:23:55 +0000 UTC" firstStartedPulling="2025-11-24 21:23:56.460352268 +0000 UTC m=+1008.542938938" lastFinishedPulling="2025-11-24 21:24:01.170499778 +0000 UTC m=+1013.253086458" observedRunningTime="2025-11-24 21:24:01.910979977 +0000 UTC m=+1013.993566687" watchObservedRunningTime="2025-11-24 21:24:01.923410465 +0000 UTC m=+1014.005997145" Nov 24 21:24:15 crc kubenswrapper[4801]: I1124 21:24:15.919827 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7c4c94676-k9g5k" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.050481 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.052623 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.054907 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jnvlk" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.066795 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.068439 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.071670 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fmxnt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.076653 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.088425 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.089977 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.094831 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k5fs4" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.094986 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.118447 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.124211 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.126003 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.130543 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-594ml" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.146300 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.148253 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.156803 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-n7r9v" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.171439 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgml\" (UniqueName: \"kubernetes.io/projected/c652c759-8522-4638-a5e5-dcdcb965fa66-kube-api-access-hzgml\") pod \"cinder-operator-controller-manager-79856dc55c-8bd5p\" (UID: \"c652c759-8522-4638-a5e5-dcdcb965fa66\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.171493 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gtp\" (UniqueName: \"kubernetes.io/projected/a4c2438a-8323-4042-a1c2-2db0fb3fd096-kube-api-access-q9gtp\") pod \"barbican-operator-controller-manager-86dc4d89c8-nf9rx\" (UID: \"a4c2438a-8323-4042-a1c2-2db0fb3fd096\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.171537 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxsh\" (UniqueName: \"kubernetes.io/projected/57e84789-6801-4b26-9014-6735327f3559-kube-api-access-jgxsh\") pod \"glance-operator-controller-manager-68b95954c9-q4x69\" (UID: \"57e84789-6801-4b26-9014-6735327f3559\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.171645 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwm7\" (UniqueName: \"kubernetes.io/projected/e0f6e49b-c86a-4d0f-b5fd-7c28c0859544-kube-api-access-htwm7\") pod \"designate-operator-controller-manager-7d695c9b56-zvrq5\" (UID: \"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.175405 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.190466 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.218272 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.219985 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.223063 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-r5f7p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.277669 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.283929 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tz9t\" (UniqueName: \"kubernetes.io/projected/fab8c86f-5335-48e2-8272-fd4c04a1f28c-kube-api-access-7tz9t\") pod \"heat-operator-controller-manager-774b86978c-mvmfd\" (UID: \"fab8c86f-5335-48e2-8272-fd4c04a1f28c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.284023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzcm\" (UniqueName: \"kubernetes.io/projected/aa9b27bf-234e-4116-8adf-68094684f237-kube-api-access-4fzcm\") pod \"horizon-operator-controller-manager-68c9694994-xw4kb\" (UID: \"aa9b27bf-234e-4116-8adf-68094684f237\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.284142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgml\" (UniqueName: \"kubernetes.io/projected/c652c759-8522-4638-a5e5-dcdcb965fa66-kube-api-access-hzgml\") pod \"cinder-operator-controller-manager-79856dc55c-8bd5p\" (UID: \"c652c759-8522-4638-a5e5-dcdcb965fa66\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.284172 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gtp\" (UniqueName: \"kubernetes.io/projected/a4c2438a-8323-4042-a1c2-2db0fb3fd096-kube-api-access-q9gtp\") pod \"barbican-operator-controller-manager-86dc4d89c8-nf9rx\" (UID: \"a4c2438a-8323-4042-a1c2-2db0fb3fd096\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.284204 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxsh\" (UniqueName: \"kubernetes.io/projected/57e84789-6801-4b26-9014-6735327f3559-kube-api-access-jgxsh\") pod \"glance-operator-controller-manager-68b95954c9-q4x69\" (UID: \"57e84789-6801-4b26-9014-6735327f3559\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.284279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwm7\" (UniqueName: \"kubernetes.io/projected/e0f6e49b-c86a-4d0f-b5fd-7c28c0859544-kube-api-access-htwm7\") pod \"designate-operator-controller-manager-7d695c9b56-zvrq5\" (UID: \"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.287409 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.315996 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.334119 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gtp\" (UniqueName: \"kubernetes.io/projected/a4c2438a-8323-4042-a1c2-2db0fb3fd096-kube-api-access-q9gtp\") pod \"barbican-operator-controller-manager-86dc4d89c8-nf9rx\" (UID: \"a4c2438a-8323-4042-a1c2-2db0fb3fd096\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.337737 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h5dzt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.340512 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.342325 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.343736 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.349912 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwm7\" (UniqueName: \"kubernetes.io/projected/e0f6e49b-c86a-4d0f-b5fd-7c28c0859544-kube-api-access-htwm7\") pod \"designate-operator-controller-manager-7d695c9b56-zvrq5\" (UID: \"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.350011 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.351918 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgml\" (UniqueName: \"kubernetes.io/projected/c652c759-8522-4638-a5e5-dcdcb965fa66-kube-api-access-hzgml\") pod \"cinder-operator-controller-manager-79856dc55c-8bd5p\" (UID: \"c652c759-8522-4638-a5e5-dcdcb965fa66\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.352270 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5l7h7" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.352358 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxsh\" (UniqueName: \"kubernetes.io/projected/57e84789-6801-4b26-9014-6735327f3559-kube-api-access-jgxsh\") pod \"glance-operator-controller-manager-68b95954c9-q4x69\" (UID: \"57e84789-6801-4b26-9014-6735327f3559\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.375038 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.377599 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.377710 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.385489 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9wr\" (UniqueName: \"kubernetes.io/projected/776dffbf-70bd-40b3-a88d-241ca0870179-kube-api-access-rs9wr\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.387799 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tz9t\" (UniqueName: \"kubernetes.io/projected/fab8c86f-5335-48e2-8272-fd4c04a1f28c-kube-api-access-7tz9t\") pod \"heat-operator-controller-manager-774b86978c-mvmfd\" (UID: \"fab8c86f-5335-48e2-8272-fd4c04a1f28c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.387953 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fzcm\" (UniqueName: \"kubernetes.io/projected/aa9b27bf-234e-4116-8adf-68094684f237-kube-api-access-4fzcm\") pod \"horizon-operator-controller-manager-68c9694994-xw4kb\" (UID: \"aa9b27bf-234e-4116-8adf-68094684f237\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.388100 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.388425 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqhn\" (UniqueName: \"kubernetes.io/projected/a5c61999-f3db-4d45-bb33-8b25d09cb675-kube-api-access-6zqhn\") pod \"ironic-operator-controller-manager-5bfcdc958c-884lt\" (UID: \"a5c61999-f3db-4d45-bb33-8b25d09cb675\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.389313 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cmnpj" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.393589 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.396100 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.417692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fzcm\" (UniqueName: \"kubernetes.io/projected/aa9b27bf-234e-4116-8adf-68094684f237-kube-api-access-4fzcm\") pod \"horizon-operator-controller-manager-68c9694994-xw4kb\" (UID: \"aa9b27bf-234e-4116-8adf-68094684f237\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.418688 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.419717 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.423869 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tz9t\" (UniqueName: \"kubernetes.io/projected/fab8c86f-5335-48e2-8272-fd4c04a1f28c-kube-api-access-7tz9t\") pod \"heat-operator-controller-manager-774b86978c-mvmfd\" (UID: \"fab8c86f-5335-48e2-8272-fd4c04a1f28c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.438135 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.440336 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.448091 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cgkk6" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.460322 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.470765 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.472626 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.482149 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5jx2z" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.483060 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.500630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.500992 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqhn\" (UniqueName: \"kubernetes.io/projected/a5c61999-f3db-4d45-bb33-8b25d09cb675-kube-api-access-6zqhn\") pod \"ironic-operator-controller-manager-5bfcdc958c-884lt\" (UID: \"a5c61999-f3db-4d45-bb33-8b25d09cb675\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.501092 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jzh7\" (UniqueName: \"kubernetes.io/projected/fa7dcf85-ac60-4a43-beef-c92e1a597e4b-kube-api-access-7jzh7\") pod \"keystone-operator-controller-manager-748dc6576f-mzknb\" (UID: \"fa7dcf85-ac60-4a43-beef-c92e1a597e4b\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.501192 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs9wr\" (UniqueName: \"kubernetes.io/projected/776dffbf-70bd-40b3-a88d-241ca0870179-kube-api-access-rs9wr\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.501276 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjdt\" (UniqueName: \"kubernetes.io/projected/6072847d-a06e-4642-a120-d89098e76619-kube-api-access-xpjdt\") pod \"manila-operator-controller-manager-58bb8d67cc-kdtqx\" (UID: \"6072847d-a06e-4642-a120-d89098e76619\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" Nov 24 21:24:33 crc kubenswrapper[4801]: E1124 21:24:33.500846 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:24:33 crc kubenswrapper[4801]: E1124 21:24:33.501565 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert podName:776dffbf-70bd-40b3-a88d-241ca0870179 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:34.001541429 +0000 UTC m=+1046.084128099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert") pod "infra-operator-controller-manager-d5cc86f4b-w8j9x" (UID: "776dffbf-70bd-40b3-a88d-241ca0870179") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.503426 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.518443 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.520312 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.538397 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-scxlf" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.549437 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.549890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqhn\" (UniqueName: \"kubernetes.io/projected/a5c61999-f3db-4d45-bb33-8b25d09cb675-kube-api-access-6zqhn\") pod \"ironic-operator-controller-manager-5bfcdc958c-884lt\" (UID: \"a5c61999-f3db-4d45-bb33-8b25d09cb675\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.561683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs9wr\" (UniqueName: \"kubernetes.io/projected/776dffbf-70bd-40b3-a88d-241ca0870179-kube-api-access-rs9wr\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.578449 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"] Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.580236 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.583321 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wfnxf" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.610414 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jzh7\" (UniqueName: \"kubernetes.io/projected/fa7dcf85-ac60-4a43-beef-c92e1a597e4b-kube-api-access-7jzh7\") pod \"keystone-operator-controller-manager-748dc6576f-mzknb\" (UID: \"fa7dcf85-ac60-4a43-beef-c92e1a597e4b\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.610680 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjdt\" (UniqueName: \"kubernetes.io/projected/6072847d-a06e-4642-a120-d89098e76619-kube-api-access-xpjdt\") pod \"manila-operator-controller-manager-58bb8d67cc-kdtqx\" (UID: \"6072847d-a06e-4642-a120-d89098e76619\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.610868 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/16518a90-b12a-402b-982b-7649945e5d7b-kube-api-access-hdwwn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jmtpv\" (UID: \"16518a90-b12a-402b-982b-7649945e5d7b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.610955 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wlq\" (UniqueName: \"kubernetes.io/projected/57efd675-3fd7-4f61-bff4-f47645a37c1d-kube-api-access-h8wlq\") pod 
\"mariadb-operator-controller-manager-cb6c4fdb7-97z5m\" (UID: \"57efd675-3fd7-4f61-bff4-f47645a37c1d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.625052 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.633756 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.636622 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.650411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.661018 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jzh7\" (UniqueName: \"kubernetes.io/projected/fa7dcf85-ac60-4a43-beef-c92e1a597e4b-kube-api-access-7jzh7\") pod \"keystone-operator-controller-manager-748dc6576f-mzknb\" (UID: \"fa7dcf85-ac60-4a43-beef-c92e1a597e4b\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.661040 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjdt\" (UniqueName: \"kubernetes.io/projected/6072847d-a06e-4642-a120-d89098e76619-kube-api-access-xpjdt\") pod \"manila-operator-controller-manager-58bb8d67cc-kdtqx\" (UID: \"6072847d-a06e-4642-a120-d89098e76619\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.661466 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-k7d65"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.671668 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.675472 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.713294 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.715597 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pz9\" (UniqueName: \"kubernetes.io/projected/94a2b03f-55b0-4cae-b8f9-53babac8e9e4-kube-api-access-r7pz9\") pod \"octavia-operator-controller-manager-fd75fd47d-jpl9z\" (UID: \"94a2b03f-55b0-4cae-b8f9-53babac8e9e4\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.716021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/16518a90-b12a-402b-982b-7649945e5d7b-kube-api-access-hdwwn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jmtpv\" (UID: \"16518a90-b12a-402b-982b-7649945e5d7b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.716148 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8wlq\" (UniqueName: \"kubernetes.io/projected/57efd675-3fd7-4f61-bff4-f47645a37c1d-kube-api-access-h8wlq\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-97z5m\" (UID: \"57efd675-3fd7-4f61-bff4-f47645a37c1d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.716350 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmkp\" (UniqueName: \"kubernetes.io/projected/9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433-kube-api-access-cxmkp\") pod \"nova-operator-controller-manager-79556f57fc-z89h9\" (UID: \"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.728903 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.755103 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.761820 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/16518a90-b12a-402b-982b-7649945e5d7b-kube-api-access-hdwwn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jmtpv\" (UID: \"16518a90-b12a-402b-982b-7649945e5d7b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.770225 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.771682 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8wlq\" (UniqueName: \"kubernetes.io/projected/57efd675-3fd7-4f61-bff4-f47645a37c1d-kube-api-access-h8wlq\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-97z5m\" (UID: \"57efd675-3fd7-4f61-bff4-f47645a37c1d\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.773441 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.775791 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.795004 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.795447 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-z67qz"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.821005 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.822841 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.837965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7pz9\" (UniqueName: \"kubernetes.io/projected/94a2b03f-55b0-4cae-b8f9-53babac8e9e4-kube-api-access-r7pz9\") pod \"octavia-operator-controller-manager-fd75fd47d-jpl9z\" (UID: \"94a2b03f-55b0-4cae-b8f9-53babac8e9e4\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.839075 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmkp\" (UniqueName: \"kubernetes.io/projected/9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433-kube-api-access-cxmkp\") pod \"nova-operator-controller-manager-79556f57fc-z89h9\" (UID: \"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.839186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.839214 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jst\" (UniqueName: \"kubernetes.io/projected/03fe9036-562f-47e8-94c6-c64f1e289895-kube-api-access-p7jst\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.853742 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7cxmf"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.853989 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.881848 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.884003 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.899130 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-c48nd"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.914011 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmkp\" (UniqueName: \"kubernetes.io/projected/9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433-kube-api-access-cxmkp\") pod \"nova-operator-controller-manager-79556f57fc-z89h9\" (UID: \"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.921474 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"]
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.942831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csw62\" (UniqueName: \"kubernetes.io/projected/b1de0d3c-119a-447e-aa94-63b0fcf992fa-kube-api-access-csw62\") pod \"placement-operator-controller-manager-5db546f9d9-n4wlr\" (UID: \"b1de0d3c-119a-447e-aa94-63b0fcf992fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.943021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.943045 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pdtd\" (UniqueName: \"kubernetes.io/projected/8eaba32d-1c83-4c67-8202-329ed133d882-kube-api-access-6pdtd\") pod \"ovn-operator-controller-manager-66cf5c67ff-7wgw6\" (UID: \"8eaba32d-1c83-4c67-8202-329ed133d882\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.943064 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jst\" (UniqueName: \"kubernetes.io/projected/03fe9036-562f-47e8-94c6-c64f1e289895-kube-api-access-p7jst\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:33 crc kubenswrapper[4801]: E1124 21:24:33.943577 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:33 crc kubenswrapper[4801]: E1124 21:24:33.943627 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert podName:03fe9036-562f-47e8-94c6-c64f1e289895 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:34.443610048 +0000 UTC m=+1046.526196718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" (UID: "03fe9036-562f-47e8-94c6-c64f1e289895") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.956732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7pz9\" (UniqueName: \"kubernetes.io/projected/94a2b03f-55b0-4cae-b8f9-53babac8e9e4-kube-api-access-r7pz9\") pod \"octavia-operator-controller-manager-fd75fd47d-jpl9z\" (UID: \"94a2b03f-55b0-4cae-b8f9-53babac8e9e4\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"
Nov 24 21:24:33 crc kubenswrapper[4801]: I1124 21:24:33.983673 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:33.998011 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.008533 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.010411 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fb57f"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.023252 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jst\" (UniqueName: \"kubernetes.io/projected/03fe9036-562f-47e8-94c6-c64f1e289895-kube-api-access-p7jst\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.031457 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.044871 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csw62\" (UniqueName: \"kubernetes.io/projected/b1de0d3c-119a-447e-aa94-63b0fcf992fa-kube-api-access-csw62\") pod \"placement-operator-controller-manager-5db546f9d9-n4wlr\" (UID: \"b1de0d3c-119a-447e-aa94-63b0fcf992fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.044952 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2rw\" (UniqueName: \"kubernetes.io/projected/c8b5803e-6b9e-430b-a809-bd51c7ce77c8-kube-api-access-4z2rw\") pod \"swift-operator-controller-manager-6fdc4fcf86-jhrrw\" (UID: \"c8b5803e-6b9e-430b-a809-bd51c7ce77c8\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.044998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.045096 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pdtd\" (UniqueName: \"kubernetes.io/projected/8eaba32d-1c83-4c67-8202-329ed133d882-kube-api-access-6pdtd\") pod \"ovn-operator-controller-manager-66cf5c67ff-7wgw6\" (UID: \"8eaba32d-1c83-4c67-8202-329ed133d882\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.045778 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.045842 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert podName:776dffbf-70bd-40b3-a88d-241ca0870179 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:35.045827633 +0000 UTC m=+1047.128414303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert") pod "infra-operator-controller-manager-d5cc86f4b-w8j9x" (UID: "776dffbf-70bd-40b3-a88d-241ca0870179") : secret "infra-operator-webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.050945 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.088216 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.090099 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.128268 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pdtd\" (UniqueName: \"kubernetes.io/projected/8eaba32d-1c83-4c67-8202-329ed133d882-kube-api-access-6pdtd\") pod \"ovn-operator-controller-manager-66cf5c67ff-7wgw6\" (UID: \"8eaba32d-1c83-4c67-8202-329ed133d882\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.147572 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-stgns"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.163971 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgclx\" (UniqueName: \"kubernetes.io/projected/f8658668-cf48-4465-b119-dcc386aea963-kube-api-access-xgclx\") pod \"test-operator-controller-manager-5cb74df96-wjm9k\" (UID: \"f8658668-cf48-4465-b119-dcc386aea963\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.164253 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2rw\" (UniqueName: \"kubernetes.io/projected/c8b5803e-6b9e-430b-a809-bd51c7ce77c8-kube-api-access-4z2rw\") pod \"swift-operator-controller-manager-6fdc4fcf86-jhrrw\" (UID: \"c8b5803e-6b9e-430b-a809-bd51c7ce77c8\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.221382 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.225320 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.237774 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wsdbt"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.251442 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csw62\" (UniqueName: \"kubernetes.io/projected/b1de0d3c-119a-447e-aa94-63b0fcf992fa-kube-api-access-csw62\") pod \"placement-operator-controller-manager-5db546f9d9-n4wlr\" (UID: \"b1de0d3c-119a-447e-aa94-63b0fcf992fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.261137 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.265666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2rw\" (UniqueName: \"kubernetes.io/projected/c8b5803e-6b9e-430b-a809-bd51c7ce77c8-kube-api-access-4z2rw\") pod \"swift-operator-controller-manager-6fdc4fcf86-jhrrw\" (UID: \"c8b5803e-6b9e-430b-a809-bd51c7ce77c8\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.277724 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.289891 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.290326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgclx\" (UniqueName: \"kubernetes.io/projected/f8658668-cf48-4465-b119-dcc386aea963-kube-api-access-xgclx\") pod \"test-operator-controller-manager-5cb74df96-wjm9k\" (UID: \"f8658668-cf48-4465-b119-dcc386aea963\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.290480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkg5x\" (UniqueName: \"kubernetes.io/projected/82b04169-cd4a-4658-b875-88b342622816-kube-api-access-wkg5x\") pod \"telemetry-operator-controller-manager-5c7cd5746d-lcfhv\" (UID: \"82b04169-cd4a-4658-b875-88b342622816\") " pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.310839 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.320310 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.330487 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.341133 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgclx\" (UniqueName: \"kubernetes.io/projected/f8658668-cf48-4465-b119-dcc386aea963-kube-api-access-xgclx\") pod \"test-operator-controller-manager-5cb74df96-wjm9k\" (UID: \"f8658668-cf48-4465-b119-dcc386aea963\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.346450 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qlzd9"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.357209 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.361896 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.366772 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.368219 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qcswb"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.368440 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.381035 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.396243 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkg5x\" (UniqueName: \"kubernetes.io/projected/82b04169-cd4a-4658-b875-88b342622816-kube-api-access-wkg5x\") pod \"telemetry-operator-controller-manager-5c7cd5746d-lcfhv\" (UID: \"82b04169-cd4a-4658-b875-88b342622816\") " pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.396448 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff7c6\" (UniqueName: \"kubernetes.io/projected/0e5bc499-0e37-444d-9341-0f30dd8aaf4b-kube-api-access-ff7c6\") pod \"watcher-operator-controller-manager-864885998-4lmlj\" (UID: \"0e5bc499-0e37-444d-9341-0f30dd8aaf4b\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.432244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkg5x\" (UniqueName: \"kubernetes.io/projected/82b04169-cd4a-4658-b875-88b342622816-kube-api-access-wkg5x\") pod \"telemetry-operator-controller-manager-5c7cd5746d-lcfhv\" (UID: \"82b04169-cd4a-4658-b875-88b342622816\") " pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.444271 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.455829 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.459986 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vkh88"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.469957 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.501824 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff7c6\" (UniqueName: \"kubernetes.io/projected/0e5bc499-0e37-444d-9341-0f30dd8aaf4b-kube-api-access-ff7c6\") pod \"watcher-operator-controller-manager-864885998-4lmlj\" (UID: \"0e5bc499-0e37-444d-9341-0f30dd8aaf4b\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.501952 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.501985 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.502101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgtw4\" (UniqueName: \"kubernetes.io/projected/d878fee2-936b-4264-938e-3d7997ec2c7d-kube-api-access-qgtw4\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.502159 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.502195 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfkm\" (UniqueName: \"kubernetes.io/projected/45ac145c-4f11-43ca-81c3-7b56c357ce5d-kube-api-access-ckfkm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9skrc\" (UID: \"45ac145c-4f11-43ca-81c3-7b56c357ce5d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.502675 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.502829 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert podName:03fe9036-562f-47e8-94c6-c64f1e289895 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:35.502806128 +0000 UTC m=+1047.585392798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" (UID: "03fe9036-562f-47e8-94c6-c64f1e289895") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.533903 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff7c6\" (UniqueName: \"kubernetes.io/projected/0e5bc499-0e37-444d-9341-0f30dd8aaf4b-kube-api-access-ff7c6\") pod \"watcher-operator-controller-manager-864885998-4lmlj\" (UID: \"0e5bc499-0e37-444d-9341-0f30dd8aaf4b\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.538556 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.605794 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.606302 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.606456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgtw4\" (UniqueName: \"kubernetes.io/projected/d878fee2-936b-4264-938e-3d7997ec2c7d-kube-api-access-qgtw4\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.606515 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfkm\" (UniqueName: \"kubernetes.io/projected/45ac145c-4f11-43ca-81c3-7b56c357ce5d-kube-api-access-ckfkm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9skrc\" (UID: \"45ac145c-4f11-43ca-81c3-7b56c357ce5d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.607062 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.607126 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs podName:d878fee2-936b-4264-938e-3d7997ec2c7d nodeName:}" failed. No retries permitted until 2025-11-24 21:24:35.107108399 +0000 UTC m=+1047.189695059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs") pod "openstack-operator-controller-manager-5fcf4778d9-sfg5s" (UID: "d878fee2-936b-4264-938e-3d7997ec2c7d") : secret "metrics-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.607297 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: E1124 21:24:34.607341 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs podName:d878fee2-936b-4264-938e-3d7997ec2c7d nodeName:}" failed. No retries permitted until 2025-11-24 21:24:35.107333056 +0000 UTC m=+1047.189919726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs") pod "openstack-operator-controller-manager-5fcf4778d9-sfg5s" (UID: "d878fee2-936b-4264-938e-3d7997ec2c7d") : secret "webhook-server-cert" not found
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.640692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfkm\" (UniqueName: \"kubernetes.io/projected/45ac145c-4f11-43ca-81c3-7b56c357ce5d-kube-api-access-ckfkm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9skrc\" (UID: \"45ac145c-4f11-43ca-81c3-7b56c357ce5d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.646415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgtw4\" (UniqueName: \"kubernetes.io/projected/d878fee2-936b-4264-938e-3d7997ec2c7d-kube-api-access-qgtw4\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.678479 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.714236 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.764317 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.797576 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p"]
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.923975 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"
Nov 24 21:24:34 crc kubenswrapper[4801]: I1124 21:24:34.975097 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.012959 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.074426 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"
Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.130552 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.130605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.130634 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132127 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132230 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert podName:776dffbf-70bd-40b3-a88d-241ca0870179 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:37.132203513 +0000 UTC m=+1049.214790183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert") pod "infra-operator-controller-manager-d5cc86f4b-w8j9x" (UID: "776dffbf-70bd-40b3-a88d-241ca0870179") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132571 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132623 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs podName:d878fee2-936b-4264-938e-3d7997ec2c7d nodeName:}" failed. No retries permitted until 2025-11-24 21:24:36.132605236 +0000 UTC m=+1048.215191906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs") pod "openstack-operator-controller-manager-5fcf4778d9-sfg5s" (UID: "d878fee2-936b-4264-938e-3d7997ec2c7d") : secret "webhook-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132663 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.132680 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs podName:d878fee2-936b-4264-938e-3d7997ec2c7d nodeName:}" failed. No retries permitted until 2025-11-24 21:24:36.132674588 +0000 UTC m=+1048.215261258 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs") pod "openstack-operator-controller-manager-5fcf4778d9-sfg5s" (UID: "d878fee2-936b-4264-938e-3d7997ec2c7d") : secret "metrics-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.295269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" event={"ID":"c652c759-8522-4638-a5e5-dcdcb965fa66","Type":"ContainerStarted","Data":"f141fc27925771d2e3bb801dc308bbf67af424bc1fde9954171409bc54f7e7de"} Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.540318 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.541045 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: E1124 21:24:35.541119 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert podName:03fe9036-562f-47e8-94c6-c64f1e289895 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:37.541093205 +0000 UTC m=+1049.623679885 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" (UID: "03fe9036-562f-47e8-94c6-c64f1e289895") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.545553 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.558785 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.562521 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.596007 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5"] Nov 24 21:24:35 crc kubenswrapper[4801]: W1124 21:24:35.596308 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f6e49b_c86a_4d0f_b5fd_7c28c0859544.slice/crio-533ac876aba8ac52778441feba944de63bed4b146e9ea63b3fa507394376c8e0 WatchSource:0}: Error finding container 533ac876aba8ac52778441feba944de63bed4b146e9ea63b3fa507394376c8e0: Status 404 returned error can't find the container with id 533ac876aba8ac52778441feba944de63bed4b146e9ea63b3fa507394376c8e0 Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.829267 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.840535 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.870524 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb"] Nov 24 21:24:35 crc kubenswrapper[4801]: W1124 21:24:35.884658 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa7dcf85_ac60_4a43_beef_c92e1a597e4b.slice/crio-e8ac151197a442054937428fa070a5a48f20e28aa75a16aca1d7b6c0e1d56ee2 WatchSource:0}: Error finding container e8ac151197a442054937428fa070a5a48f20e28aa75a16aca1d7b6c0e1d56ee2: Status 404 returned error can't find the container with id e8ac151197a442054937428fa070a5a48f20e28aa75a16aca1d7b6c0e1d56ee2 Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.892795 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb"] Nov 24 21:24:35 crc kubenswrapper[4801]: W1124 21:24:35.903806 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16518a90_b12a_402b_982b_7649945e5d7b.slice/crio-3862e768d56b231df030d17e4071eff564ca91dd94828d20a32460ba2d9ae2de WatchSource:0}: Error finding container 3862e768d56b231df030d17e4071eff564ca91dd94828d20a32460ba2d9ae2de: Status 404 returned error can't find the container with id 3862e768d56b231df030d17e4071eff564ca91dd94828d20a32460ba2d9ae2de Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.931972 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv"] Nov 24 21:24:35 crc kubenswrapper[4801]: I1124 21:24:35.967608 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 
21:24:36.165672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.165738 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.166007 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.166079 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs podName:d878fee2-936b-4264-938e-3d7997ec2c7d nodeName:}" failed. No retries permitted until 2025-11-24 21:24:38.166055792 +0000 UTC m=+1050.248642462 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs") pod "openstack-operator-controller-manager-5fcf4778d9-sfg5s" (UID: "d878fee2-936b-4264-938e-3d7997ec2c7d") : secret "webhook-server-cert" not found Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.181035 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-metrics-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.309294 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" event={"ID":"fa7dcf85-ac60-4a43-beef-c92e1a597e4b","Type":"ContainerStarted","Data":"e8ac151197a442054937428fa070a5a48f20e28aa75a16aca1d7b6c0e1d56ee2"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.310076 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" event={"ID":"fab8c86f-5335-48e2-8272-fd4c04a1f28c","Type":"ContainerStarted","Data":"59f338fab92bd38843799668f810cb835fd9ce7ef8bf8dc716fff26556672d1c"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.316810 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" event={"ID":"16518a90-b12a-402b-982b-7649945e5d7b","Type":"ContainerStarted","Data":"3862e768d56b231df030d17e4071eff564ca91dd94828d20a32460ba2d9ae2de"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.318216 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" 
event={"ID":"aa9b27bf-234e-4116-8adf-68094684f237","Type":"ContainerStarted","Data":"65073293588fc38ff9e6dec1005ef7af57dbbedf6f9eef599d9532ec6ee92b23"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.319817 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" event={"ID":"57e84789-6801-4b26-9014-6735327f3559","Type":"ContainerStarted","Data":"1272dbe0140c58102aee2979b0a183a59db1eb1bd36df064f114bdf242baa853"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.320959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" event={"ID":"a5c61999-f3db-4d45-bb33-8b25d09cb675","Type":"ContainerStarted","Data":"df554868743e72ed4c0f11a8b55129f31e5cf62a3ef17e310b59bf5c01aab999"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.322211 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" event={"ID":"a4c2438a-8323-4042-a1c2-2db0fb3fd096","Type":"ContainerStarted","Data":"1a1c5dd8e08a1860dbdfeeea845660bb454986c12bc8295ed536b92e3859d03c"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.323711 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" event={"ID":"6072847d-a06e-4642-a120-d89098e76619","Type":"ContainerStarted","Data":"65247a77f3ebea781bc4a41de15b19c597f31c64fa187b7952ee62e81c7f0e54"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.326573 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" event={"ID":"57efd675-3fd7-4f61-bff4-f47645a37c1d","Type":"ContainerStarted","Data":"82da1bf9850af1220adf1f255d51e51199a761b8b3187202a8284aa7ee197c2d"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.328930 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" event={"ID":"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544","Type":"ContainerStarted","Data":"533ac876aba8ac52778441feba944de63bed4b146e9ea63b3fa507394376c8e0"} Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.432961 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.440094 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.446769 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.475555 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.531119 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.544610 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.564629 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.579426 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z"] Nov 24 21:24:36 crc kubenswrapper[4801]: I1124 21:24:36.605795 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc"] Nov 24 21:24:36 crc kubenswrapper[4801]: W1124 21:24:36.635462 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1de0d3c_119a_447e_aa94_63b0fcf992fa.slice/crio-95d215cd6c967e9fd06bcfe0d4c10542e85c556f97525903dc9f249d58b51b0a WatchSource:0}: Error finding container 95d215cd6c967e9fd06bcfe0d4c10542e85c556f97525903dc9f249d58b51b0a: Status 404 returned error can't find the container with id 95d215cd6c967e9fd06bcfe0d4c10542e85c556f97525903dc9f249d58b51b0a Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.646346 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.143:5001/openstack-k8s-operators/telemetry-operator:63d66b0ed1eb239a0fed716be9146a482aff93a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wkg5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5c7cd5746d-lcfhv_openstack-operators(82b04169-cd4a-4658-b875-88b342622816): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.652554 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csw62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-n4wlr_openstack-operators(b1de0d3c-119a-447e-aa94-63b0fcf992fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.652695 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckfkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9skrc_openstack-operators(45ac145c-4f11-43ca-81c3-7b56c357ce5d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.652795 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wkg5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5c7cd5746d-lcfhv_openstack-operators(82b04169-cd4a-4658-b875-88b342622816): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.654561 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" podUID="82b04169-cd4a-4658-b875-88b342622816" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.654946 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podUID="45ac145c-4f11-43ca-81c3-7b56c357ce5d" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.659774 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csw62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-n4wlr_openstack-operators(b1de0d3c-119a-447e-aa94-63b0fcf992fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.670959 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" 
podUID="b1de0d3c-119a-447e-aa94-63b0fcf992fa"
Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.711595 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-jpl9z_openstack-operators(94a2b03f-55b0-4cae-b8f9-53babac8e9e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.718854 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-jpl9z_openstack-operators(94a2b03f-55b0-4cae-b8f9-53babac8e9e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 24 21:24:36 crc kubenswrapper[4801]: E1124 21:24:36.720203 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" podUID="94a2b03f-55b0-4cae-b8f9-53babac8e9e4"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.189321 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.196090 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/776dffbf-70bd-40b3-a88d-241ca0870179-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-w8j9x\" (UID: \"776dffbf-70bd-40b3-a88d-241ca0870179\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.347498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" event={"ID":"8eaba32d-1c83-4c67-8202-329ed133d882","Type":"ContainerStarted","Data":"f1a78d346c41929bdfb6002acebe20f11ff3a0148d8f7c0595c2af7378325672"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.352811 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" event={"ID":"b1de0d3c-119a-447e-aa94-63b0fcf992fa","Type":"ContainerStarted","Data":"95d215cd6c967e9fd06bcfe0d4c10542e85c556f97525903dc9f249d58b51b0a"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.355693 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" event={"ID":"82b04169-cd4a-4658-b875-88b342622816","Type":"ContainerStarted","Data":"2dacbbc4aad1fc7983e75c0d22352e2c35b924d1785290902e662c879f77a1b1"}
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.360150 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" podUID="b1de0d3c-119a-447e-aa94-63b0fcf992fa"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.363152 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" event={"ID":"45ac145c-4f11-43ca-81c3-7b56c357ce5d","Type":"ContainerStarted","Data":"3d5fb4420f19a8cddcf1309c351a9714104e9e9c53a7bd36f81ede597d290b0d"}
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.364872 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.143:5001/openstack-k8s-operators/telemetry-operator:63d66b0ed1eb239a0fed716be9146a482aff93a4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" podUID="82b04169-cd4a-4658-b875-88b342622816"
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.364949 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podUID="45ac145c-4f11-43ca-81c3-7b56c357ce5d"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.365768 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" event={"ID":"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433","Type":"ContainerStarted","Data":"14fd269c446571b3b4073660fd0b541a26669e192b6aada42d1329e117318edd"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.367569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" event={"ID":"c8b5803e-6b9e-430b-a809-bd51c7ce77c8","Type":"ContainerStarted","Data":"f6395d2ddae4263b39da314734ca9eae54a13794e9b0a13fffefcb01cccfb6ca"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.370513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" event={"ID":"94a2b03f-55b0-4cae-b8f9-53babac8e9e4","Type":"ContainerStarted","Data":"fb0621f3ea85b6aad8e83f6b149c8ab8a3c7d246c7a14d947c5b2e3c2f181dd2"}
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.386520 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" podUID="94a2b03f-55b0-4cae-b8f9-53babac8e9e4"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.386913 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" event={"ID":"f8658668-cf48-4465-b119-dcc386aea963","Type":"ContainerStarted","Data":"de2be1f83148a8265b3d189570563c62fcff3ac323a103c0bfe6997a514a71c6"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.394402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" event={"ID":"0e5bc499-0e37-444d-9341-0f30dd8aaf4b","Type":"ContainerStarted","Data":"7fa4253eba086f53be132c05a0fe16682eb6e5d15d4dd4fc506aed875d0c4aea"}
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.483942 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:24:37 crc kubenswrapper[4801]: I1124 21:24:37.598680 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.598907 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:37 crc kubenswrapper[4801]: E1124 21:24:37.599225 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert podName:03fe9036-562f-47e8-94c6-c64f1e289895 nodeName:}" failed. No retries permitted until 2025-11-24 21:24:41.599204503 +0000 UTC m=+1053.681791163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" (UID: "03fe9036-562f-47e8-94c6-c64f1e289895") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 24 21:24:38 crc kubenswrapper[4801]: I1124 21:24:38.182045 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"]
Nov 24 21:24:38 crc kubenswrapper[4801]: I1124 21:24:38.213910 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:38 crc kubenswrapper[4801]: I1124 21:24:38.224315 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d878fee2-936b-4264-938e-3d7997ec2c7d-webhook-certs\") pod \"openstack-operator-controller-manager-5fcf4778d9-sfg5s\" (UID: \"d878fee2-936b-4264-938e-3d7997ec2c7d\") " pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:38 crc kubenswrapper[4801]: W1124 21:24:38.252653 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod776dffbf_70bd_40b3_a88d_241ca0870179.slice/crio-4427b701999962bcb259db395661268950c7179b08b38b9e729b7cd159bf3881 WatchSource:0}: Error finding container 4427b701999962bcb259db395661268950c7179b08b38b9e729b7cd159bf3881: Status 404 returned error can't find the container with id 4427b701999962bcb259db395661268950c7179b08b38b9e729b7cd159bf3881
Nov 24 21:24:38 crc kubenswrapper[4801]: I1124 21:24:38.346299 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"
Nov 24 21:24:38 crc kubenswrapper[4801]: I1124 21:24:38.437847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" event={"ID":"776dffbf-70bd-40b3-a88d-241ca0870179","Type":"ContainerStarted","Data":"4427b701999962bcb259db395661268950c7179b08b38b9e729b7cd159bf3881"}
Nov 24 21:24:38 crc kubenswrapper[4801]: E1124 21:24:38.443868 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podUID="45ac145c-4f11-43ca-81c3-7b56c357ce5d"
Nov 24 21:24:38 crc kubenswrapper[4801]: E1124 21:24:38.443951 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.143:5001/openstack-k8s-operators/telemetry-operator:63d66b0ed1eb239a0fed716be9146a482aff93a4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" podUID="82b04169-cd4a-4658-b875-88b342622816"
Nov 24 21:24:38 crc kubenswrapper[4801]: E1124 21:24:38.444460 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" podUID="b1de0d3c-119a-447e-aa94-63b0fcf992fa"
Nov 24 21:24:38 crc kubenswrapper[4801]: E1124 21:24:38.451217 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" podUID="94a2b03f-55b0-4cae-b8f9-53babac8e9e4"
Nov 24 21:24:39 crc kubenswrapper[4801]: I1124 21:24:39.194017 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s"]
Nov 24 21:24:41 crc kubenswrapper[4801]: I1124 21:24:41.610194 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:41 crc kubenswrapper[4801]: I1124 21:24:41.624613 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03fe9036-562f-47e8-94c6-c64f1e289895-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-84qdp\" (UID: \"03fe9036-562f-47e8-94c6-c64f1e289895\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:41 crc kubenswrapper[4801]: I1124 21:24:41.838447 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"
Nov 24 21:24:42 crc kubenswrapper[4801]: I1124 21:24:42.507800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" event={"ID":"d878fee2-936b-4264-938e-3d7997ec2c7d","Type":"ContainerStarted","Data":"9704e46a3d9479f3411fb1e60ce1e08d7ad7218074f7e3bad77f2a024894db29"}
Nov 24 21:24:49 crc kubenswrapper[4801]: E1124 21:24:49.763838 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f"
Nov 24 21:24:49 crc kubenswrapper[4801]: E1124 21:24:49.765102 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htwm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d695c9b56-zvrq5_openstack-operators(e0f6e49b-c86a-4d0f-b5fd-7c28c0859544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:50 crc kubenswrapper[4801]: E1124 21:24:50.264121 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a"
Nov 24 21:24:50 crc kubenswrapper[4801]: E1124 21:24:50.264431 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jzh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-mzknb_openstack-operators(fa7dcf85-ac60-4a43-beef-c92e1a597e4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:50 crc kubenswrapper[4801]: E1124 21:24:50.830588 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a"
Nov 24 21:24:50 crc kubenswrapper[4801]: E1124 21:24:50.831703 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpjdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-kdtqx_openstack-operators(6072847d-a06e-4642-a120-d89098e76619): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:51 crc kubenswrapper[4801]: E1124 21:24:51.342725 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7"
Nov 24 21:24:51 crc kubenswrapper[4801]: E1124 21:24:51.343093 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9gtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-86dc4d89c8-nf9rx_openstack-operators(a4c2438a-8323-4042-a1c2-2db0fb3fd096): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:52 crc kubenswrapper[4801]: E1124 21:24:52.905218 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b"
Nov 24 21:24:52 crc kubenswrapper[4801]: E1124 21:24:52.905593 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-7wgw6_openstack-operators(8eaba32d-1c83-4c67-8202-329ed133d882): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:53 crc kubenswrapper[4801]: E1124 21:24:53.484212 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0"
Nov 24 21:24:53 crc kubenswrapper[4801]: E1124 21:24:53.484525 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z2rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-jhrrw_openstack-operators(c8b5803e-6b9e-430b-a809-bd51c7ce77c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:24:53 crc kubenswrapper[4801]: E1124 21:24:53.973431 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7"
Nov 24 21:24:53 crc kubenswrapper[4801]: E1124 21:24:53.973751 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxmkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-z89h9_openstack-operators(9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:24:54 crc kubenswrapper[4801]: I1124 21:24:54.320412 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:24:54 crc kubenswrapper[4801]: I1124 21:24:54.320498 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:24:54 crc kubenswrapper[4801]: E1124 21:24:54.378466 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 24 21:24:54 crc kubenswrapper[4801]: E1124 21:24:54.378732 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rs9wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-w8j9x_openstack-operators(776dffbf-70bd-40b3-a88d-241ca0870179): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:24:55 crc kubenswrapper[4801]: E1124 21:24:55.023259 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 24 21:24:55 crc kubenswrapper[4801]: E1124 21:24:55.023792 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ff7c6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-4lmlj_openstack-operators(0e5bc499-0e37-444d-9341-0f30dd8aaf4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:25:02 crc kubenswrapper[4801]: I1124 21:25:02.723397 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" event={"ID":"d878fee2-936b-4264-938e-3d7997ec2c7d","Type":"ContainerStarted","Data":"263c1c986b2474d75634664b72f4ada1009996c5e7a8bf8cba4e49db2e1b59ff"} Nov 24 21:25:02 crc kubenswrapper[4801]: I1124 21:25:02.725277 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" Nov 24 21:25:03 crc kubenswrapper[4801]: I1124 21:25:03.095149 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" podStartSLOduration=30.095114643 podStartE2EDuration="30.095114643s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:25:02.768132592 +0000 UTC m=+1074.850719282" watchObservedRunningTime="2025-11-24 21:25:03.095114643 +0000 UTC m=+1075.177701313" Nov 24 21:25:03 crc kubenswrapper[4801]: I1124 21:25:03.101661 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp"] Nov 24 21:25:08 crc kubenswrapper[4801]: E1124 21:25:08.082500 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 24 21:25:08 crc kubenswrapper[4801]: E1124 21:25:08.084872 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckfkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9skrc_openstack-operators(45ac145c-4f11-43ca-81c3-7b56c357ce5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:25:08 crc kubenswrapper[4801]: E1124 21:25:08.086247 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podUID="45ac145c-4f11-43ca-81c3-7b56c357ce5d" Nov 24 21:25:08 crc kubenswrapper[4801]: W1124 21:25:08.175282 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03fe9036_562f_47e8_94c6_c64f1e289895.slice/crio-0fced65e8e4eb4763930da79489d0d900a0a5df5facd73eef9b041f9a2690672 WatchSource:0}: Error finding container 0fced65e8e4eb4763930da79489d0d900a0a5df5facd73eef9b041f9a2690672: Status 404 returned error can't find the container with id 
0fced65e8e4eb4763930da79489d0d900a0a5df5facd73eef9b041f9a2690672 Nov 24 21:25:08 crc kubenswrapper[4801]: I1124 21:25:08.354972 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fcf4778d9-sfg5s" Nov 24 21:25:08 crc kubenswrapper[4801]: I1124 21:25:08.790382 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" event={"ID":"a5c61999-f3db-4d45-bb33-8b25d09cb675","Type":"ContainerStarted","Data":"eeacbc75e206e6b144ccdef608c2a6361fac330847af5576ba853cbf8314e2bb"} Nov 24 21:25:08 crc kubenswrapper[4801]: I1124 21:25:08.791550 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" event={"ID":"c652c759-8522-4638-a5e5-dcdcb965fa66","Type":"ContainerStarted","Data":"3f35b44ab5c034c3aeb7a10e8823a4cda7f905f8e0d54113fd716a17786de1e0"} Nov 24 21:25:08 crc kubenswrapper[4801]: I1124 21:25:08.792291 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" event={"ID":"03fe9036-562f-47e8-94c6-c64f1e289895","Type":"ContainerStarted","Data":"0fced65e8e4eb4763930da79489d0d900a0a5df5facd73eef9b041f9a2690672"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.808494 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" event={"ID":"57efd675-3fd7-4f61-bff4-f47645a37c1d","Type":"ContainerStarted","Data":"786e5a30b514e437ce6f0b8ba2a2b1b4171728a2f4d8b3b5232bc7612b601eae"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.829032 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" 
event={"ID":"16518a90-b12a-402b-982b-7649945e5d7b","Type":"ContainerStarted","Data":"64df045157ff26b0a2e2c1804f6d285cea49f84733d62ce4a8f4b8d622eeef29"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.838141 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" event={"ID":"aa9b27bf-234e-4116-8adf-68094684f237","Type":"ContainerStarted","Data":"a4b2dfd9ae444a77c876092e4ea837d3af30a221ea056f1543d4a931dec5074d"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.842938 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" event={"ID":"f8658668-cf48-4465-b119-dcc386aea963","Type":"ContainerStarted","Data":"470c450275f96e5dc07689ef322943a529e2f2d31d0265f5ef0533514536928e"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.850531 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" event={"ID":"57e84789-6801-4b26-9014-6735327f3559","Type":"ContainerStarted","Data":"8b9c920a4dca5219ef8de9b0850da929cb0fc41a09ef8cef798a316d3c20b596"} Nov 24 21:25:09 crc kubenswrapper[4801]: I1124 21:25:09.859201 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" event={"ID":"fab8c86f-5335-48e2-8272-fd4c04a1f28c","Type":"ContainerStarted","Data":"9bb521411528247205e6c8f9adf57b1cd1b247456ad9e106688733d4eaac36a6"} Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.461812 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\": context canceled" 
image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.462251 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jzh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-mzknb_openstack-operators(fa7dcf85-ac60-4a43-beef-c92e1a597e4b): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get 
\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\": context canceled" logger="UnhandledError" Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.463489 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\\\": context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" podUID="fa7dcf85-ac60-4a43-beef-c92e1a597e4b" Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.558778 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.558968 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-7wgw6_openstack-operators(8eaba32d-1c83-4c67-8202-329ed133d882): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\": context canceled" logger="UnhandledError" Nov 24 21:25:10 crc kubenswrapper[4801]: E1124 21:25:10.560190 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get 
\\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\\\": context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" podUID="8eaba32d-1c83-4c67-8202-329ed133d882" Nov 24 21:25:12 crc kubenswrapper[4801]: I1124 21:25:12.885579 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" event={"ID":"94a2b03f-55b0-4cae-b8f9-53babac8e9e4","Type":"ContainerStarted","Data":"631006aedca0b9524aec4f422456385c9fcf559dec0a78f8e8d7e0de44dc1c80"} Nov 24 21:25:13 crc kubenswrapper[4801]: I1124 21:25:13.903553 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" event={"ID":"b1de0d3c-119a-447e-aa94-63b0fcf992fa","Type":"ContainerStarted","Data":"783f7f50a986f411a72b19c8177b6bccba413060f184587b609e831816fe91cd"} Nov 24 21:25:13 crc kubenswrapper[4801]: I1124 21:25:13.907592 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" event={"ID":"82b04169-cd4a-4658-b875-88b342622816","Type":"ContainerStarted","Data":"1ec9cf1c31bd7a29b4f327b9cc242ceb9835b248a84351e4f66977a72fc1430e"} Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.448568 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" podUID="776dffbf-70bd-40b3-a88d-241ca0870179" Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.472632 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" podUID="0e5bc499-0e37-444d-9341-0f30dd8aaf4b" Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.606358 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" podUID="e0f6e49b-c86a-4d0f-b5fd-7c28c0859544" Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.719159 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" podUID="9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433" Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.782259 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" podUID="c8b5803e-6b9e-430b-a809-bd51c7ce77c8" Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.837801 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" podUID="6072847d-a06e-4642-a120-d89098e76619" Nov 24 21:25:14 crc kubenswrapper[4801]: I1124 21:25:14.926884 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" 
event={"ID":"b1de0d3c-119a-447e-aa94-63b0fcf992fa","Type":"ContainerStarted","Data":"1426ab29311886770d353ee87f98e637ac7bbe3fa39bf4d48882fb2e6dd7629a"} Nov 24 21:25:14 crc kubenswrapper[4801]: I1124 21:25:14.928359 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" Nov 24 21:25:14 crc kubenswrapper[4801]: I1124 21:25:14.944145 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" event={"ID":"c8b5803e-6b9e-430b-a809-bd51c7ce77c8","Type":"ContainerStarted","Data":"6a4324e0778cc29b99ef311fb24633114f91156e277a09f09bcdcd2603b6a05b"} Nov 24 21:25:14 crc kubenswrapper[4801]: I1124 21:25:14.946792 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" event={"ID":"0e5bc499-0e37-444d-9341-0f30dd8aaf4b","Type":"ContainerStarted","Data":"0fca6395e31036b1c2b04e88b495f72941ab3891082bfce30dfc73f6a0742267"} Nov 24 21:25:14 crc kubenswrapper[4801]: E1124 21:25:14.990860 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" podUID="a4c2438a-8323-4042-a1c2-2db0fb3fd096" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:14.998469 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" event={"ID":"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433","Type":"ContainerStarted","Data":"4609c93ea203b1d7bc12dfb7064df94d87dd11de43d19cadb72854e657fbefd2"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.013894 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr" podStartSLOduration=16.016861873 podStartE2EDuration="42.013863078s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.652397175 +0000 UTC m=+1048.734983845" lastFinishedPulling="2025-11-24 21:25:02.64939839 +0000 UTC m=+1074.731985050" observedRunningTime="2025-11-24 21:25:14.992306244 +0000 UTC m=+1087.074892914" watchObservedRunningTime="2025-11-24 21:25:15.013863078 +0000 UTC m=+1087.096449748" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.026617 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" event={"ID":"03fe9036-562f-47e8-94c6-c64f1e289895","Type":"ContainerStarted","Data":"b7ed42a92bb0aac2703b9c3ea2219eb6a2d95fbabed7638e7b9e1e81eca7529d"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.070426 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" event={"ID":"6072847d-a06e-4642-a120-d89098e76619","Type":"ContainerStarted","Data":"7b65459ac73dbff786d5ba7bb33d4fdee3146c03c1cee07bf25fdc0e81a65b03"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.104413 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" event={"ID":"8eaba32d-1c83-4c67-8202-329ed133d882","Type":"ContainerStarted","Data":"deacc2eeba63d7243d20441fc1f24fa9d3903c8adc7305746552b41e6c32bf66"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.142880 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" event={"ID":"776dffbf-70bd-40b3-a88d-241ca0870179","Type":"ContainerStarted","Data":"1f7eeaa47476113425ca7a1dd6748a0a341d3f24a7f640fbc9021e1aa85142f8"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.206724 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" event={"ID":"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544","Type":"ContainerStarted","Data":"a7bbe00f29edcc785b2ede7528bb5000502a897a31a77e3ca51fddbf87fbe8c5"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.215202 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" event={"ID":"94a2b03f-55b0-4cae-b8f9-53babac8e9e4","Type":"ContainerStarted","Data":"9e3e3a082d748eff6547129e974af914853444479cfe824be39cc82a73b75dfe"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.216215 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.259539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" event={"ID":"82b04169-cd4a-4658-b875-88b342622816","Type":"ContainerStarted","Data":"433a780cabfdb391803558964b7c6fad5063f56d3355e5ee021b10873d638f78"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.260476 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.268102 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" podStartSLOduration=5.3739714020000005 podStartE2EDuration="42.268073805s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.711436381 +0000 UTC m=+1048.794023051" lastFinishedPulling="2025-11-24 21:25:13.605538784 +0000 UTC m=+1085.688125454" observedRunningTime="2025-11-24 21:25:15.261033274 +0000 UTC m=+1087.343619934" 
watchObservedRunningTime="2025-11-24 21:25:15.268073805 +0000 UTC m=+1087.350660475" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.280012 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" event={"ID":"fa7dcf85-ac60-4a43-beef-c92e1a597e4b","Type":"ContainerStarted","Data":"5dc5d5f08054902dce3fc94b0fd27e4f506647a55c689ed2f987dc2955dd968d"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.303446 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv" podStartSLOduration=9.851364368 podStartE2EDuration="42.303416639s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.646198512 +0000 UTC m=+1048.728785182" lastFinishedPulling="2025-11-24 21:25:09.098250773 +0000 UTC m=+1081.180837453" observedRunningTime="2025-11-24 21:25:15.300851149 +0000 UTC m=+1087.383437819" watchObservedRunningTime="2025-11-24 21:25:15.303416639 +0000 UTC m=+1087.386003309" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.318642 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" event={"ID":"c652c759-8522-4638-a5e5-dcdcb965fa66","Type":"ContainerStarted","Data":"92d8596430a5f355da80bb41aaed9d2142c39841cfaf5b7bd689c33a6239c8a3"} Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.321241 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.322093 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" Nov 24 21:25:15 crc kubenswrapper[4801]: I1124 21:25:15.373138 4801 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8bd5p" podStartSLOduration=3.986549312 podStartE2EDuration="42.373110478s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.022563457 +0000 UTC m=+1047.105150127" lastFinishedPulling="2025-11-24 21:25:13.409124623 +0000 UTC m=+1085.491711293" observedRunningTime="2025-11-24 21:25:15.353405292 +0000 UTC m=+1087.435991962" watchObservedRunningTime="2025-11-24 21:25:15.373110478 +0000 UTC m=+1087.455697148" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.328493 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" event={"ID":"aa9b27bf-234e-4116-8adf-68094684f237","Type":"ContainerStarted","Data":"7733682ce02ec4bcf5ea05c7360baed6cc434dbc1a2b2d05fc38843190a37158"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.330656 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.332106 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" event={"ID":"a4c2438a-8323-4042-a1c2-2db0fb3fd096","Type":"ContainerStarted","Data":"40da231089a8129cb0e37b9cabbf78bf657eccb0acc20b4d5c124ee9fc65b679"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.333649 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.334854 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" 
event={"ID":"0e5bc499-0e37-444d-9341-0f30dd8aaf4b","Type":"ContainerStarted","Data":"0401854b4cb89b33319bba536a49f9f17d7c1212f5efb70757448f7947a370c2"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.334958 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.337124 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" event={"ID":"9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433","Type":"ContainerStarted","Data":"aba9dd8f7fd67809cdeeaf548789889f23fcced5ea7c45e200632006cda4da2b"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.338288 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.343579 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" event={"ID":"776dffbf-70bd-40b3-a88d-241ca0870179","Type":"ContainerStarted","Data":"7e7be270a2bfc021fcecfc1fb81f021bffdc831ebc7095f49f520a207c71212f"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.344446 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.346682 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" event={"ID":"fab8c86f-5335-48e2-8272-fd4c04a1f28c","Type":"ContainerStarted","Data":"5d075fba36a9f0520b5df16c5792b0185e408c26d85117b5d7d60fa6867cc0d4"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.349334 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.351302 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" event={"ID":"16518a90-b12a-402b-982b-7649945e5d7b","Type":"ContainerStarted","Data":"aae3ac2a7905950cd469a4156def84fdf3f92571a0a71e65c3e4c97f3a4d3c70"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.353422 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.353795 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.353965 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.360269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" event={"ID":"c8b5803e-6b9e-430b-a809-bd51c7ce77c8","Type":"ContainerStarted","Data":"5e59bd257f4ac3ae6980bf566e5095dcd5ad48557f3c5437b536270f7650943c"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.361507 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.363468 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" event={"ID":"57e84789-6801-4b26-9014-6735327f3559","Type":"ContainerStarted","Data":"7a8c89090e3c7444dcb09cffe69512080cc2d860880a33ae044e1c257efe4fbd"} Nov 24 21:25:16 crc 
kubenswrapper[4801]: I1124 21:25:16.365880 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.369582 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.369725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" event={"ID":"fa7dcf85-ac60-4a43-beef-c92e1a597e4b","Type":"ContainerStarted","Data":"ee6ebfdf1fae076416e9406b3051ccdc3e31c00530ec6eed3e52051cf677bdf0"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.370587 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.380699 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-xw4kb" podStartSLOduration=5.850976843 podStartE2EDuration="43.380674495s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.941276355 +0000 UTC m=+1048.023863025" lastFinishedPulling="2025-11-24 21:25:13.470974007 +0000 UTC m=+1085.553560677" observedRunningTime="2025-11-24 21:25:16.37093018 +0000 UTC m=+1088.453516860" watchObservedRunningTime="2025-11-24 21:25:16.380674495 +0000 UTC m=+1088.463261165" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.382289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" event={"ID":"8eaba32d-1c83-4c67-8202-329ed133d882","Type":"ContainerStarted","Data":"2b0d2ecf8774099d2f3e717e5b8d338e03b783c05bbdee8b76d40eec283d5324"} Nov 24 21:25:16 crc 
kubenswrapper[4801]: I1124 21:25:16.383482 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.385061 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" event={"ID":"57efd675-3fd7-4f61-bff4-f47645a37c1d","Type":"ContainerStarted","Data":"d3c041a95e03e250c54a955fdfa96a8dbba517a9d51e4ee7f5da13a1fbc1fb27"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.387289 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.391322 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.392933 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" event={"ID":"a5c61999-f3db-4d45-bb33-8b25d09cb675","Type":"ContainerStarted","Data":"355ff1f6df2e55cdccb1009e53c3873dda6e12bbfb37177d19651f3c48594e8a"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.395064 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.397802 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.399672 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" 
event={"ID":"6072847d-a06e-4642-a120-d89098e76619","Type":"ContainerStarted","Data":"817f4187ecef264e55ee9717c6f6c85aeec8e94e865296d5c38667f8a684794f"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.400505 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.417648 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" event={"ID":"e0f6e49b-c86a-4d0f-b5fd-7c28c0859544","Type":"ContainerStarted","Data":"1f47945f1fee21df1d2a8c327a850c32db584e0520924f33da82cb4a62dd478e"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.418425 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.422119 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" event={"ID":"f8658668-cf48-4465-b119-dcc386aea963","Type":"ContainerStarted","Data":"4fad40256169bcb2315f66fd874c0e5989d5b63fedee01f9ef3e7ba126ec736d"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.424067 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.429662 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.430977 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" podStartSLOduration=6.173809266 podStartE2EDuration="43.430937096s" 
podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.927195015 +0000 UTC m=+1048.009781685" lastFinishedPulling="2025-11-24 21:25:13.184322845 +0000 UTC m=+1085.266909515" observedRunningTime="2025-11-24 21:25:16.416163944 +0000 UTC m=+1088.498750614" watchObservedRunningTime="2025-11-24 21:25:16.430937096 +0000 UTC m=+1088.513523776" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.436566 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" event={"ID":"03fe9036-562f-47e8-94c6-c64f1e289895","Type":"ContainerStarted","Data":"09142a2c0cf0328b2a6dde5c1002f1481bde220079dc8431b06c32cbe08a6e3c"} Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.436614 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.465898 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj" podStartSLOduration=4.286179827 podStartE2EDuration="43.465867918s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.610300729 +0000 UTC m=+1048.692887429" lastFinishedPulling="2025-11-24 21:25:15.78998885 +0000 UTC m=+1087.872575520" observedRunningTime="2025-11-24 21:25:16.452516101 +0000 UTC m=+1088.535102771" watchObservedRunningTime="2025-11-24 21:25:16.465867918 +0000 UTC m=+1088.548454588" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.485902 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jmtpv" podStartSLOduration=6.170430071 podStartE2EDuration="43.485880684s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 
21:24:35.916095788 +0000 UTC m=+1047.998682458" lastFinishedPulling="2025-11-24 21:25:13.231546401 +0000 UTC m=+1085.314133071" observedRunningTime="2025-11-24 21:25:16.47550855 +0000 UTC m=+1088.558095220" watchObservedRunningTime="2025-11-24 21:25:16.485880684 +0000 UTC m=+1088.568467354" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.539058 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-q4x69" podStartSLOduration=5.670260265 podStartE2EDuration="43.539032095s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.60167329 +0000 UTC m=+1047.684259970" lastFinishedPulling="2025-11-24 21:25:13.47044513 +0000 UTC m=+1085.553031800" observedRunningTime="2025-11-24 21:25:16.535874116 +0000 UTC m=+1088.618460786" watchObservedRunningTime="2025-11-24 21:25:16.539032095 +0000 UTC m=+1088.621618765" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.582655 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" podStartSLOduration=4.673901827 podStartE2EDuration="43.582626217s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.635656502 +0000 UTC m=+1048.718243172" lastFinishedPulling="2025-11-24 21:25:15.544380892 +0000 UTC m=+1087.626967562" observedRunningTime="2025-11-24 21:25:16.562625913 +0000 UTC m=+1088.645212583" watchObservedRunningTime="2025-11-24 21:25:16.582626217 +0000 UTC m=+1088.665212887" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.737292 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mvmfd" podStartSLOduration=6.091442841 podStartE2EDuration="43.737266582s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.840461714 +0000 
UTC m=+1047.923048384" lastFinishedPulling="2025-11-24 21:25:13.486285455 +0000 UTC m=+1085.568872125" observedRunningTime="2025-11-24 21:25:16.718868177 +0000 UTC m=+1088.801454847" watchObservedRunningTime="2025-11-24 21:25:16.737266582 +0000 UTC m=+1088.819853252" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.788130 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw" podStartSLOduration=4.822782353 podStartE2EDuration="43.788099941s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.464231254 +0000 UTC m=+1048.546817924" lastFinishedPulling="2025-11-24 21:25:15.429548842 +0000 UTC m=+1087.512135512" observedRunningTime="2025-11-24 21:25:16.767565559 +0000 UTC m=+1088.850152229" watchObservedRunningTime="2025-11-24 21:25:16.788099941 +0000 UTC m=+1088.870686611" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.829776 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x" podStartSLOduration=6.298478912 podStartE2EDuration="43.829746153s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:38.261486036 +0000 UTC m=+1050.344072696" lastFinishedPulling="2025-11-24 21:25:15.792753277 +0000 UTC m=+1087.875339937" observedRunningTime="2025-11-24 21:25:16.81429448 +0000 UTC m=+1088.896881150" watchObservedRunningTime="2025-11-24 21:25:16.829746153 +0000 UTC m=+1088.912332823" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.866101 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" podStartSLOduration=3.678363117 podStartE2EDuration="43.866070838s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.603790066 +0000 UTC 
m=+1047.686376746" lastFinishedPulling="2025-11-24 21:25:15.791497797 +0000 UTC m=+1087.874084467" observedRunningTime="2025-11-24 21:25:16.864883412 +0000 UTC m=+1088.947470082" watchObservedRunningTime="2025-11-24 21:25:16.866070838 +0000 UTC m=+1088.948657518" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.926769 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6" podStartSLOduration=7.313665898 podStartE2EDuration="43.926750315s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.539636471 +0000 UTC m=+1048.622223141" lastFinishedPulling="2025-11-24 21:25:13.152720888 +0000 UTC m=+1085.235307558" observedRunningTime="2025-11-24 21:25:16.92178624 +0000 UTC m=+1089.004372910" watchObservedRunningTime="2025-11-24 21:25:16.926750315 +0000 UTC m=+1089.009336985" Nov 24 21:25:16 crc kubenswrapper[4801]: I1124 21:25:16.992592 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wjm9k" podStartSLOduration=6.964862774 podStartE2EDuration="43.992570973s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.442510954 +0000 UTC m=+1048.525097614" lastFinishedPulling="2025-11-24 21:25:13.470219143 +0000 UTC m=+1085.552805813" observedRunningTime="2025-11-24 21:25:16.992397537 +0000 UTC m=+1089.074984207" watchObservedRunningTime="2025-11-24 21:25:16.992570973 +0000 UTC m=+1089.075157643" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.038063 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-97z5m" podStartSLOduration=6.152110058 podStartE2EDuration="44.038045525s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.584902105 +0000 UTC m=+1047.667488775" 
lastFinishedPulling="2025-11-24 21:25:13.470837572 +0000 UTC m=+1085.553424242" observedRunningTime="2025-11-24 21:25:17.036424363 +0000 UTC m=+1089.119011043" watchObservedRunningTime="2025-11-24 21:25:17.038045525 +0000 UTC m=+1089.120632195" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.063866 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-884lt" podStartSLOduration=6.388235709 podStartE2EDuration="44.063847861s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.861972077 +0000 UTC m=+1047.944558747" lastFinishedPulling="2025-11-24 21:25:13.537584229 +0000 UTC m=+1085.620170899" observedRunningTime="2025-11-24 21:25:17.061150826 +0000 UTC m=+1089.143737496" watchObservedRunningTime="2025-11-24 21:25:17.063847861 +0000 UTC m=+1089.146434521" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.088965 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" podStartSLOduration=4.2408186 podStartE2EDuration="44.088935876s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.944084313 +0000 UTC m=+1048.026670973" lastFinishedPulling="2025-11-24 21:25:15.792201579 +0000 UTC m=+1087.874788249" observedRunningTime="2025-11-24 21:25:17.083728632 +0000 UTC m=+1089.166315292" watchObservedRunningTime="2025-11-24 21:25:17.088935876 +0000 UTC m=+1089.171522546" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.134098 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" podStartSLOduration=39.176401197 podStartE2EDuration="44.134072576s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:25:08.179906735 +0000 UTC m=+1080.262493435" 
lastFinishedPulling="2025-11-24 21:25:13.137578144 +0000 UTC m=+1085.220164814" observedRunningTime="2025-11-24 21:25:17.129222545 +0000 UTC m=+1089.211809215" watchObservedRunningTime="2025-11-24 21:25:17.134072576 +0000 UTC m=+1089.216659236" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.449350 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" event={"ID":"a4c2438a-8323-4042-a1c2-2db0fb3fd096","Type":"ContainerStarted","Data":"9c6d0b1722d5081672aa5a3de065272efab8b3ca9823cb950ac1cc2129409ec0"} Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.454972 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.456343 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jpl9z" Nov 24 21:25:17 crc kubenswrapper[4801]: I1124 21:25:17.477433 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" podStartSLOduration=3.248899462 podStartE2EDuration="44.477397379s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:35.558396806 +0000 UTC m=+1047.640983476" lastFinishedPulling="2025-11-24 21:25:16.786894723 +0000 UTC m=+1088.869481393" observedRunningTime="2025-11-24 21:25:17.469345557 +0000 UTC m=+1089.551932227" watchObservedRunningTime="2025-11-24 21:25:17.477397379 +0000 UTC m=+1089.559984049" Nov 24 21:25:19 crc kubenswrapper[4801]: E1124 21:25:19.665950 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podUID="45ac145c-4f11-43ca-81c3-7b56c357ce5d" Nov 24 21:25:21 crc kubenswrapper[4801]: I1124 21:25:21.845699 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-84qdp" Nov 24 21:25:23 crc kubenswrapper[4801]: I1124 21:25:23.381846 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-nf9rx" Nov 24 21:25:23 crc kubenswrapper[4801]: I1124 21:25:23.459886 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-zvrq5" Nov 24 21:25:23 crc kubenswrapper[4801]: I1124 21:25:23.680857 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-mzknb" Nov 24 21:25:23 crc kubenswrapper[4801]: I1124 21:25:23.717837 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-kdtqx" Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.293009 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-z89h9" Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.320166 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.320483 4801 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.683112 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-7wgw6"
Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.725960 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-n4wlr"
Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.769118 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-jhrrw"
Nov 24 21:25:24 crc kubenswrapper[4801]: I1124 21:25:24.979929 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5c7cd5746d-lcfhv"
Nov 24 21:25:25 crc kubenswrapper[4801]: I1124 21:25:25.017501 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-4lmlj"
Nov 24 21:25:27 crc kubenswrapper[4801]: I1124 21:25:27.492133 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-w8j9x"
Nov 24 21:25:31 crc kubenswrapper[4801]: I1124 21:25:31.603915 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" event={"ID":"45ac145c-4f11-43ca-81c3-7b56c357ce5d","Type":"ContainerStarted","Data":"09b37016d228bccc5d9ea19fbe98b620681b489638ca52c2de9430396d7503b7"}
Nov 24 21:25:31 crc kubenswrapper[4801]: I1124 21:25:31.631784 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9skrc" podStartSLOduration=4.124809898 podStartE2EDuration="58.631760659s" podCreationTimestamp="2025-11-24 21:24:33 +0000 UTC" firstStartedPulling="2025-11-24 21:24:36.652626382 +0000 UTC m=+1048.735213052" lastFinishedPulling="2025-11-24 21:25:31.159577103 +0000 UTC m=+1103.242163813" observedRunningTime="2025-11-24 21:25:31.627185716 +0000 UTC m=+1103.709772426" watchObservedRunningTime="2025-11-24 21:25:31.631760659 +0000 UTC m=+1103.714347329"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.537680 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.539880 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.544924 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.545235 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.545744 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.547709 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ktgck"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.561283 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.621783 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.641715 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.646882 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.649476 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.707662 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7w5\" (UniqueName: \"kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.707734 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.809393 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhl5w\" (UniqueName: \"kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.809472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7w5\" (UniqueName: \"kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.809606 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.809700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.809758 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.811405 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.836720 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7w5\" (UniqueName: \"kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5\") pod \"dnsmasq-dns-675f4bcbfc-v4t7w\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.870729 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.913120 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhl5w\" (UniqueName: \"kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.913233 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.913293 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.914248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.914593 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.938055 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhl5w\" (UniqueName: \"kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w\") pod \"dnsmasq-dns-78dd6ddcc-pcjww\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:50 crc kubenswrapper[4801]: I1124 21:25:50.986822 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:25:51 crc kubenswrapper[4801]: I1124 21:25:51.384609 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:25:51 crc kubenswrapper[4801]: W1124 21:25:51.384996 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ebaafa4_545b_4839_8228_7f09acfc53e2.slice/crio-cdc38c2416e5a2ca816ebbafd1e8cd8af011426bd22f2f2fb6e5f51fd90282b1 WatchSource:0}: Error finding container cdc38c2416e5a2ca816ebbafd1e8cd8af011426bd22f2f2fb6e5f51fd90282b1: Status 404 returned error can't find the container with id cdc38c2416e5a2ca816ebbafd1e8cd8af011426bd22f2f2fb6e5f51fd90282b1
Nov 24 21:25:51 crc kubenswrapper[4801]: I1124 21:25:51.512526 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:25:51 crc kubenswrapper[4801]: W1124 21:25:51.516489 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod351c1559_8490_41f1_a436_5b9ab663da7c.slice/crio-5162444d2f0b2aa4faeca8b6ffcab0c1168e0d6d1b39d8a0ac0fe364867669eb WatchSource:0}: Error finding container 5162444d2f0b2aa4faeca8b6ffcab0c1168e0d6d1b39d8a0ac0fe364867669eb: Status 404 returned error can't find the container with id 5162444d2f0b2aa4faeca8b6ffcab0c1168e0d6d1b39d8a0ac0fe364867669eb
Nov 24 21:25:51 crc kubenswrapper[4801]: I1124 21:25:51.879787 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww" event={"ID":"351c1559-8490-41f1-a436-5b9ab663da7c","Type":"ContainerStarted","Data":"5162444d2f0b2aa4faeca8b6ffcab0c1168e0d6d1b39d8a0ac0fe364867669eb"}
Nov 24 21:25:51 crc kubenswrapper[4801]: I1124 21:25:51.881278 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w" event={"ID":"6ebaafa4-545b-4839-8228-7f09acfc53e2","Type":"ContainerStarted","Data":"cdc38c2416e5a2ca816ebbafd1e8cd8af011426bd22f2f2fb6e5f51fd90282b1"}
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.516885 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.542160 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.545546 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.550708 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.685115 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.685670 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.685786 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdrsz\" (UniqueName: \"kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.787204 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.787429 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.787463 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdrsz\" (UniqueName: \"kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.789697 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.790718 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.812706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdrsz\" (UniqueName: \"kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz\") pod \"dnsmasq-dns-666b6646f7-fstrk\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.855433 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.884346 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.891697 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.893811 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fstrk"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.909937 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"]
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.997679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltfb\" (UniqueName: \"kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.998187 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:53 crc kubenswrapper[4801]: I1124 21:25:53.998384 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.101625 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.101833 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.102063 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltfb\" (UniqueName: \"kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.104481 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.118512 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.122934 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nltfb\" (UniqueName: \"kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb\") pod \"dnsmasq-dns-57d769cc4f-ccvps\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.227034 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.320280 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.320745 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.320805 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.321982 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.322038 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b" gracePeriod=600
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.514379 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.708195 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.710328 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.711916 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.712867 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.717801 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.718048 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-22wn8"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.718167 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.718321 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.718592 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.718750 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.723325 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.730098 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.732778 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.756978 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.763777 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Nov 24 21:25:54 crc kubenswrapper[4801]: W1124 21:25:54.770629 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ea9d5_f483_4b62_a4fe_fdb7f0f2dc6b.slice/crio-2544723fa90fe2f84df6a94ee22376894a0565b79afbadbcbf4e7b68cf6908ac WatchSource:0}: Error finding container 2544723fa90fe2f84df6a94ee22376894a0565b79afbadbcbf4e7b68cf6908ac: Status 404 returned error can't find the container with id 2544723fa90fe2f84df6a94ee22376894a0565b79afbadbcbf4e7b68cf6908ac
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.780206 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.801235 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819121 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819277 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819397 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819419 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819437 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819500 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8j5\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819533 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819558 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819583 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819609 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819681 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819742 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819784 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819827 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkpj\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819891 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819933 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819957 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.819999 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922281 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922335 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922438 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922507 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922559 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922597 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.922633 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923517 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923548 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923581 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923593 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923615 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkpj\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923664 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923696 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1"
Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923746 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923765 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923788 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923809 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 
crc kubenswrapper[4801]: I1124 21:25:54.923869 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923899 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923919 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923938 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.923981 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924001 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8j5\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924027 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924055 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924099 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924119 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbmc\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924144 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.924937 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.926041 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" event={"ID":"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b","Type":"ContainerStarted","Data":"2544723fa90fe2f84df6a94ee22376894a0565b79afbadbcbf4e7b68cf6908ac"} Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.926860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.928042 4801 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.928400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.928977 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.929142 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.929219 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.929645 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf\") pod \"rabbitmq-server-1\" (UID: 
\"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.930412 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.930486 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.930582 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.931186 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.932718 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.933226 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.933652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" event={"ID":"bc7011d4-ca32-48d0-a29d-a42554930a07","Type":"ContainerStarted","Data":"e7c7756239e38b815165c15fffaa506b83be45f54032358e11477cf02cd598b0"} Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.936245 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.937769 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b" exitCode=0 Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.937820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b"} Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.937865 4801 scope.go:117] "RemoveContainer" containerID="8919cd9122ea8e468b9aac6663ba78df15883fda40442e460d8b6e6a81f4e98c" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.937876 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " 
pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.938539 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.938549 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.943250 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.948194 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8j5\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.953860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkpj\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.961889 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " pod="openstack/rabbitmq-server-0" Nov 24 21:25:54 crc kubenswrapper[4801]: I1124 21:25:54.979062 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " pod="openstack/rabbitmq-server-1" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026412 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026524 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bbmc\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc 
kubenswrapper[4801]: I1124 21:25:55.026557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026585 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026652 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026670 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026697 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026719 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.026759 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.030859 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.031616 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.031943 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.032071 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") device mount path 
\"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.032880 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.033000 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.034089 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.034290 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.036745 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.044111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.044253 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.053272 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bbmc\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.060122 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.063166 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.069339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.071523 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.075933 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076142 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076260 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076423 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xptr6" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076268 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076795 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.076832 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.077489 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.080547 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128518 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128587 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128747 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128789 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128818 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128856 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.128933 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxrm\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.129032 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.129096 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.129124 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232464 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232487 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232518 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232561 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232618 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232637 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232661 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cxrm\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.232741 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.234414 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.234601 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.234842 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 
21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.235325 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.235678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.236658 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.243221 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.244080 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.250252 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.252510 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.254822 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cxrm\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.279605 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.444456 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.665642 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.866433 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.973318 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.980779 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerStarted","Data":"488474bfd44a78f699ce79584dfecc8b7e8a94969879cc9005adbf8855dbb746"} Nov 24 21:25:55 crc kubenswrapper[4801]: I1124 21:25:55.989451 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b"} Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.009022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerStarted","Data":"150da886bc1eddc1921b1f00f6dce49d91efa12a34c22ffd3cb8409a921252e3"} Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.137960 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:25:56 crc kubenswrapper[4801]: W1124 21:25:56.150984 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod052262eb_3362_4169_a9e2_96e364d20be8.slice/crio-d95f1d7145b0da0abfff345a22a0d35cd42e537d13c7ae17a6ff6c0afb7436ac WatchSource:0}: Error finding container 
d95f1d7145b0da0abfff345a22a0d35cd42e537d13c7ae17a6ff6c0afb7436ac: Status 404 returned error can't find the container with id d95f1d7145b0da0abfff345a22a0d35cd42e537d13c7ae17a6ff6c0afb7436ac Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.498672 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.500584 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.504065 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.504578 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rvt9j" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.505226 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.518214 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.542943 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.545807 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.589615 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.589714 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5slzw\" (UniqueName: \"kubernetes.io/projected/c6e61e62-b039-4898-b4fa-f20160b67641-kube-api-access-5slzw\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.589867 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.590129 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.590165 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.590209 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.590784 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.590831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.694834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.694891 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696766 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5slzw\" (UniqueName: 
\"kubernetes.io/projected/c6e61e62-b039-4898-b4fa-f20160b67641-kube-api-access-5slzw\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696832 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696899 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696921 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.696944 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.697278 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") device mount path 
\"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.697926 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.698738 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.698782 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.701451 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e61e62-b039-4898-b4fa-f20160b67641-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.705645 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.712179 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e61e62-b039-4898-b4fa-f20160b67641-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.720478 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5slzw\" (UniqueName: \"kubernetes.io/projected/c6e61e62-b039-4898-b4fa-f20160b67641-kube-api-access-5slzw\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.760637 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c6e61e62-b039-4898-b4fa-f20160b67641\") " pod="openstack/openstack-galera-0" Nov 24 21:25:56 crc kubenswrapper[4801]: I1124 21:25:56.895236 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.038347 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerStarted","Data":"443604bfe22f48240d3d77b03fcb5449fb5bc0851124b26d31f57e9caddf6c4a"} Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.043949 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerStarted","Data":"d95f1d7145b0da0abfff345a22a0d35cd42e537d13c7ae17a6ff6c0afb7436ac"} Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.766491 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:25:57 crc kubenswrapper[4801]: W1124 21:25:57.790083 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e61e62_b039_4898_b4fa_f20160b67641.slice/crio-4eb8bf261ef3ab25cfc36f524d36593373f4205ab3c5e6d49b2f363448c8c5bb WatchSource:0}: Error finding container 4eb8bf261ef3ab25cfc36f524d36593373f4205ab3c5e6d49b2f363448c8c5bb: Status 404 returned error can't find the container with id 4eb8bf261ef3ab25cfc36f524d36593373f4205ab3c5e6d49b2f363448c8c5bb Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.921210 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.925883 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.936209 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.936568 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9ns2h" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.936688 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.936763 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 21:25:57 crc kubenswrapper[4801]: I1124 21:25:57.980295 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.119762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.119878 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.119938 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnpd2\" (UniqueName: 
\"kubernetes.io/projected/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kube-api-access-cnpd2\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.120007 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.120203 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.120358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.120856 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.120958 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.145996 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6e61e62-b039-4898-b4fa-f20160b67641","Type":"ContainerStarted","Data":"4eb8bf261ef3ab25cfc36f524d36593373f4205ab3c5e6d49b2f363448c8c5bb"} Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223706 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223799 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223873 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223904 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223939 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.223993 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnpd2\" (UniqueName: \"kubernetes.io/projected/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kube-api-access-cnpd2\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.224009 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.225248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " 
pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.229829 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd963d5f-9d48-4924-a44c-d3a97a3e6461-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.231382 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.244482 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.245555 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd963d5f-9d48-4924-a44c-d3a97a3e6461-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.281821 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 
21:25:58.292204 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnpd2\" (UniqueName: \"kubernetes.io/projected/bd963d5f-9d48-4924-a44c-d3a97a3e6461-kube-api-access-cnpd2\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.295063 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd963d5f-9d48-4924-a44c-d3a97a3e6461-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.322715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bd963d5f-9d48-4924-a44c-d3a97a3e6461\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.513284 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.522973 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.527636 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vtzsj" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.529959 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.530102 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537348 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-config-data\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kolla-config\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537533 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrbf\" (UniqueName: \"kubernetes.io/projected/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kube-api-access-gdrbf\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537581 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537606 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.537521 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.563491 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.643334 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-config-data\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.649335 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-config-data\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.649532 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kolla-config\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.649734 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrbf\" 
(UniqueName: \"kubernetes.io/projected/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kube-api-access-gdrbf\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.649837 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.649864 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.652887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kolla-config\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.686552 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.703592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdrbf\" (UniqueName: \"kubernetes.io/projected/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-kube-api-access-gdrbf\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc 
kubenswrapper[4801]: I1124 21:25:58.734644 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a25adc5-f2a3-44b0-aeb6-8a45707600fa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9a25adc5-f2a3-44b0-aeb6-8a45707600fa\") " pod="openstack/memcached-0" Nov 24 21:25:58 crc kubenswrapper[4801]: I1124 21:25:58.892921 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 21:25:59 crc kubenswrapper[4801]: I1124 21:25:59.297953 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:25:59 crc kubenswrapper[4801]: I1124 21:25:59.726701 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.348677 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9a25adc5-f2a3-44b0-aeb6-8a45707600fa","Type":"ContainerStarted","Data":"3c06e44ef065676cf66f8d1c6be11ba9d7ef3854b8f50a40ddb5ec0f0d2d6056"} Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.357855 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bd963d5f-9d48-4924-a44c-d3a97a3e6461","Type":"ContainerStarted","Data":"8490f2072bc4645908ebed19d6abae6c320f02e2b6a7f1c4b5e67b7505cad78e"} Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.713323 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.714701 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.714795 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.718557 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-n8pm7" Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.745291 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9gz7\" (UniqueName: \"kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7\") pod \"kube-state-metrics-0\" (UID: \"41c9700a-0355-4e6d-82d0-934fc45f5d52\") " pod="openstack/kube-state-metrics-0" Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.855170 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9gz7\" (UniqueName: \"kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7\") pod \"kube-state-metrics-0\" (UID: \"41c9700a-0355-4e6d-82d0-934fc45f5d52\") " pod="openstack/kube-state-metrics-0" Nov 24 21:26:00 crc kubenswrapper[4801]: I1124 21:26:00.914249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9gz7\" (UniqueName: \"kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7\") pod \"kube-state-metrics-0\" (UID: \"41c9700a-0355-4e6d-82d0-934fc45f5d52\") " pod="openstack/kube-state-metrics-0" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.086113 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.628760 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g"] Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.630810 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.638969 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.654275 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-xq54m" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.670233 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g"] Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.712810 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2cs\" (UniqueName: \"kubernetes.io/projected/7deaf86e-7ea9-45ca-9f31-787c92a15400-kube-api-access-ch2cs\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.713358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.823967 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2cs\" (UniqueName: \"kubernetes.io/projected/7deaf86e-7ea9-45ca-9f31-787c92a15400-kube-api-access-ch2cs\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " 
pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.824026 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: E1124 21:26:01.824191 4801 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 24 21:26:01 crc kubenswrapper[4801]: E1124 21:26:01.824271 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert podName:7deaf86e-7ea9-45ca-9f31-787c92a15400 nodeName:}" failed. No retries permitted until 2025-11-24 21:26:02.324247288 +0000 UTC m=+1134.406833958 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-xjv9g" (UID: "7deaf86e-7ea9-45ca-9f31-787c92a15400") : secret "observability-ui-dashboards" not found Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.860153 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2cs\" (UniqueName: \"kubernetes.io/projected/7deaf86e-7ea9-45ca-9f31-787c92a15400-kube-api-access-ch2cs\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.978412 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7db5d7468f-zm4f5"] Nov 24 21:26:01 crc kubenswrapper[4801]: I1124 21:26:01.980002 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:01.999979 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7db5d7468f-zm4f5"] Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043618 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-oauth-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043741 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-trusted-ca-bundle\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043771 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnbd\" (UniqueName: \"kubernetes.io/projected/5c1d851c-1a26-46c1-8895-c27cdfc03881-kube-api-access-5rnbd\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043856 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-oauth-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043926 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.043956 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-service-ca\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.055799 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.061878 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.071975 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-677dx" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.073142 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.073295 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.077742 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.078289 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.084731 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.098222 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.152650 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.153199 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.153293 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.162883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-service-ca\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.162964 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163123 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163265 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-oauth-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163393 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-trusted-ca-bundle\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163450 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7vw\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rnbd\" 
(UniqueName: \"kubernetes.io/projected/5c1d851c-1a26-46c1-8895-c27cdfc03881-kube-api-access-5rnbd\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163631 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-oauth-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.163756 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.164954 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-service-ca\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.165642 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.171752 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-oauth-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.176422 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c1d851c-1a26-46c1-8895-c27cdfc03881-trusted-ca-bundle\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.177193 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-oauth-config\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.217700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rnbd\" (UniqueName: \"kubernetes.io/projected/5c1d851c-1a26-46c1-8895-c27cdfc03881-kube-api-access-5rnbd\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.223187 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5c1d851c-1a26-46c1-8895-c27cdfc03881-console-serving-cert\") pod \"console-7db5d7468f-zm4f5\" (UID: \"5c1d851c-1a26-46c1-8895-c27cdfc03881\") " pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.242212 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.267714 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7vw\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268189 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268348 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268522 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.268912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.269074 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.275201 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.282025 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.282901 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.283273 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.283334 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac9cfb283f52eb6a18303fe393c9cece8e9a2d177b89c99352e5073eb15e7e54/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.284400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 
21:26:02.294380 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.306800 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.318307 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7vw\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.322239 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.367201 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.375169 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.388064 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7deaf86e-7ea9-45ca-9f31-787c92a15400-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-xjv9g\" (UID: \"7deaf86e-7ea9-45ca-9f31-787c92a15400\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.422379 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41c9700a-0355-4e6d-82d0-934fc45f5d52","Type":"ContainerStarted","Data":"24812beac58c05e6d8e5f42e59699163b6043b985234eb78a3fae7106a2b1670"} Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.448478 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:26:02 crc kubenswrapper[4801]: I1124 21:26:02.616226 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.368779 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qm4tj"] Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.372295 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.374680 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6fs5g" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.377740 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.377839 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.400414 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sbns4"] Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.417082 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.417959 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qm4tj"] Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.429511 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.449711 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sbns4"] Nov 24 21:26:03 crc kubenswrapper[4801]: W1124 21:26:03.477661 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbadfbdd_010e_4ce4_bc42_8871dc88b990.slice/crio-6e004288a626364ed7e5756ed186872b82cbf05a89202506f1dc681149b9e8ef WatchSource:0}: Error finding container 6e004288a626364ed7e5756ed186872b82cbf05a89202506f1dc681149b9e8ef: Status 404 returned error can't find the container with id 6e004288a626364ed7e5756ed186872b82cbf05a89202506f1dc681149b9e8ef Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515292 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515425 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5520b43a-322d-44eb-87d5-b35af1ad70bc-scripts\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515474 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-wxwhz\" (UniqueName: \"kubernetes.io/projected/89430698-4742-4f29-93c4-ecd964255e62-kube-api-access-wxwhz\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515508 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89430698-4742-4f29-93c4-ecd964255e62-scripts\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515537 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-ovn-controller-tls-certs\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5bj\" (UniqueName: \"kubernetes.io/projected/5520b43a-322d-44eb-87d5-b35af1ad70bc-kube-api-access-md5bj\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-combined-ca-bundle\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515675 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-etc-ovs\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515713 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-log-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.515743 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-log\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.517154 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-lib\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.517250 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-run\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.517319 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.621482 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5520b43a-322d-44eb-87d5-b35af1ad70bc-scripts\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.621570 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwhz\" (UniqueName: \"kubernetes.io/projected/89430698-4742-4f29-93c4-ecd964255e62-kube-api-access-wxwhz\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.621612 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89430698-4742-4f29-93c4-ecd964255e62-scripts\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.621643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-ovn-controller-tls-certs\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.622955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5bj\" (UniqueName: \"kubernetes.io/projected/5520b43a-322d-44eb-87d5-b35af1ad70bc-kube-api-access-md5bj\") pod 
\"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623138 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-combined-ca-bundle\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623184 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-etc-ovs\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623227 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-log-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623261 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-log\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623294 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-lib\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc 
kubenswrapper[4801]: I1124 21:26:03.623390 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-run\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623496 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.623586 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625054 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625233 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-lib\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-run\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625472 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-run-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-var-log\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625626 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5520b43a-322d-44eb-87d5-b35af1ad70bc-etc-ovs\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.625668 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89430698-4742-4f29-93c4-ecd964255e62-var-log-ovn\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.646086 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89430698-4742-4f29-93c4-ecd964255e62-scripts\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 
21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.650008 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5520b43a-322d-44eb-87d5-b35af1ad70bc-scripts\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.655810 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-ovn-controller-tls-certs\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.657234 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwhz\" (UniqueName: \"kubernetes.io/projected/89430698-4742-4f29-93c4-ecd964255e62-kube-api-access-wxwhz\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.660302 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5bj\" (UniqueName: \"kubernetes.io/projected/5520b43a-322d-44eb-87d5-b35af1ad70bc-kube-api-access-md5bj\") pod \"ovn-controller-ovs-sbns4\" (UID: \"5520b43a-322d-44eb-87d5-b35af1ad70bc\") " pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.664693 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89430698-4742-4f29-93c4-ecd964255e62-combined-ca-bundle\") pod \"ovn-controller-qm4tj\" (UID: \"89430698-4742-4f29-93c4-ecd964255e62\") " pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.704080 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-7db5d7468f-zm4f5"] Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.713594 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.806307 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:03 crc kubenswrapper[4801]: I1124 21:26:03.926198 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g"] Nov 24 21:26:04 crc kubenswrapper[4801]: I1124 21:26:04.550801 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qm4tj"] Nov 24 21:26:04 crc kubenswrapper[4801]: I1124 21:26:04.644136 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerStarted","Data":"6e004288a626364ed7e5756ed186872b82cbf05a89202506f1dc681149b9e8ef"} Nov 24 21:26:04 crc kubenswrapper[4801]: I1124 21:26:04.656016 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" event={"ID":"7deaf86e-7ea9-45ca-9f31-787c92a15400","Type":"ContainerStarted","Data":"da8447a6b82ea55fef0569539d30d339576ea371cc209240e59ad7792891dd4f"} Nov 24 21:26:04 crc kubenswrapper[4801]: I1124 21:26:04.712407 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7db5d7468f-zm4f5" event={"ID":"5c1d851c-1a26-46c1-8895-c27cdfc03881","Type":"ContainerStarted","Data":"b417a644950b05295cb688d1be5cce310b087c3a7ac0cfd78c3bbd40275b8061"} Nov 24 21:26:05 crc kubenswrapper[4801]: I1124 21:26:05.722821 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj" 
event={"ID":"89430698-4742-4f29-93c4-ecd964255e62","Type":"ContainerStarted","Data":"e41751e0bf21a16565f92ce5dd41a6f94a33f51083bc6d317e386e8029468d9b"} Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.324787 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sbns4"] Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.404634 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2xlzh"] Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.406559 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.428058 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.429592 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.433864 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2xlzh"] Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f99lx\" (UniqueName: \"kubernetes.io/projected/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-kube-api-access-f99lx\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491397 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " 
pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491468 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-config\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491543 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-combined-ca-bundle\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491662 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovs-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.491708 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovn-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.594805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99lx\" (UniqueName: \"kubernetes.io/projected/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-kube-api-access-f99lx\") pod \"ovn-controller-metrics-2xlzh\" (UID: 
\"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.594863 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.595484 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-config\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.596027 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-combined-ca-bundle\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.596731 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovs-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.597018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovn-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " 
pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.597835 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovs-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.597905 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-ovn-rundir\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.599687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-config\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.609474 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-combined-ca-bundle\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.624102 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 
21:26:06.627182 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99lx\" (UniqueName: \"kubernetes.io/projected/b760e8c8-3610-49b7-bfb0-0f8c20b7a042-kube-api-access-f99lx\") pod \"ovn-controller-metrics-2xlzh\" (UID: \"b760e8c8-3610-49b7-bfb0-0f8c20b7a042\") " pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.738201 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2xlzh" Nov 24 21:26:06 crc kubenswrapper[4801]: I1124 21:26:06.753220 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7db5d7468f-zm4f5" event={"ID":"5c1d851c-1a26-46c1-8895-c27cdfc03881","Type":"ContainerStarted","Data":"cf593256eb99bd194809a59bb256fe2a3da347bac44a4d812d3df9b4aeb176da"} Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.282634 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7db5d7468f-zm4f5" podStartSLOduration=7.282612675 podStartE2EDuration="7.282612675s" podCreationTimestamp="2025-11-24 21:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:26:06.780672601 +0000 UTC m=+1138.863259281" watchObservedRunningTime="2025-11-24 21:26:08.282612675 +0000 UTC m=+1140.365199345" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.292019 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.296251 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.300108 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.300161 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h2sxm" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.301292 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.303046 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.324354 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.468854 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bff49\" (UniqueName: \"kubernetes.io/projected/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-kube-api-access-bff49\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.469045 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.469132 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.469160 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.469317 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.470655 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-config\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.470738 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.470886 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 
21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.522658 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.530215 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.534019 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.534187 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.534192 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.535481 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-52lqx" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.548258 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572701 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bff49\" (UniqueName: \"kubernetes.io/projected/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-kube-api-access-bff49\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572830 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572861 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572932 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572963 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-config\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.572989 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.573040 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.573918 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.578240 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.580521 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.580808 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.581334 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.583627 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.584291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.587737 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.587845 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.589653 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-config\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.592721 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bff49\" (UniqueName: \"kubernetes.io/projected/5ff19f32-8c6d-4785-ac60-ce3d2c7939ad-kube-api-access-bff49\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.625349 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.640509 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h2sxm" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.648484 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679345 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679513 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-config\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679772 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grfbt\" (UniqueName: \"kubernetes.io/projected/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-kube-api-access-grfbt\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.679869 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.680009 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " 
pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786728 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786809 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786838 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-config\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grfbt\" (UniqueName: \"kubernetes.io/projected/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-kube-api-access-grfbt\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.786960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.788310 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.788630 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.792735 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.795109 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.797113 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.798984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.799058 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.800129 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.801069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-config\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.824207 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grfbt\" (UniqueName: \"kubernetes.io/projected/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-kube-api-access-grfbt\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.835377 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.852103 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1d77f1-fc9a-46ea-8fd8-629f36a3b659-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659\") " 
pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.897790 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-52lqx" Nov 24 21:26:08 crc kubenswrapper[4801]: I1124 21:26:08.905188 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:12 crc kubenswrapper[4801]: I1124 21:26:12.323662 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:12 crc kubenswrapper[4801]: I1124 21:26:12.324644 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:12 crc kubenswrapper[4801]: I1124 21:26:12.330827 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:12 crc kubenswrapper[4801]: I1124 21:26:12.849186 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7db5d7468f-zm4f5" Nov 24 21:26:12 crc kubenswrapper[4801]: I1124 21:26:12.943649 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:26:15 crc kubenswrapper[4801]: W1124 21:26:15.730879 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5520b43a_322d_44eb_87d5_b35af1ad70bc.slice/crio-1a0521b06cd39beadea78702ca0dd6dae62d63f977d13e07ca6b41dbc58ad12f WatchSource:0}: Error finding container 1a0521b06cd39beadea78702ca0dd6dae62d63f977d13e07ca6b41dbc58ad12f: Status 404 returned error can't find the container with id 1a0521b06cd39beadea78702ca0dd6dae62d63f977d13e07ca6b41dbc58ad12f Nov 24 21:26:15 crc kubenswrapper[4801]: I1124 21:26:15.879265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-sbns4" event={"ID":"5520b43a-322d-44eb-87d5-b35af1ad70bc","Type":"ContainerStarted","Data":"1a0521b06cd39beadea78702ca0dd6dae62d63f977d13e07ca6b41dbc58ad12f"} Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.047694 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.048762 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5slzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(c6e61e62-b039-4898-b4fa-f20160b67641): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.050183 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="c6e61e62-b039-4898-b4fa-f20160b67641" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.787249 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.787487 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5d9h5f8hdch68fh8fh67fh584h96h8bh7h585hbdh8ch6ch5c9h658h55h565h4h57bh559h675h556h4hc9h588h58chd4h675h58chf4h65q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9a25adc5-f2a3-44b0-aeb6-8a45707600fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.788702 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="9a25adc5-f2a3-44b0-aeb6-8a45707600fa" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.835021 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.835242 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnpd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(bd963d5f-9d48-4924-a44c-d3a97a3e6461): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.836790 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.958856 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="c6e61e62-b039-4898-b4fa-f20160b67641" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.959486 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" Nov 24 21:26:21 crc kubenswrapper[4801]: E1124 21:26:21.959666 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="9a25adc5-f2a3-44b0-aeb6-8a45707600fa" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.272507 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.273207 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cxrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(052262eb-3362-4169-a9e2-96e364d20be8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:23 crc 
kubenswrapper[4801]: E1124 21:26:23.275520 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.284985 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.285242 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bbmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(af143054-b9a1-432a-a0f8-9f489550bd24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:23 crc 
kubenswrapper[4801]: E1124 21:26:23.286436 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.352478 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.352683 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn8j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2f56f017-0f5c-4eb2-b3be-44db75365483): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:23 crc 
kubenswrapper[4801]: E1124 21:26:23.353770 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.363257 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.363542 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnkpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(fb8472fa-9a35-4787-b38c-0c657881d910): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:26:23 crc 
kubenswrapper[4801]: E1124 21:26:23.366102 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.971511 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.971658 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.971710 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" Nov 24 21:26:23 crc kubenswrapper[4801]: E1124 21:26:23.971747 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.060017 4801 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.061288 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch2cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
observability-ui-dashboards-7d5fb4cbfb-xjv9g_openshift-operators(7deaf86e-7ea9-45ca-9f31-787c92a15400): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.062571 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" podUID="7deaf86e-7ea9-45ca-9f31-787c92a15400" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.414109 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.414357 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml 
--watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk7vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(fbadfbdd-010e-4ce4-bc42-8871dc88b990): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:26:29 crc kubenswrapper[4801]: E1124 21:26:29.415605 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" Nov 24 21:26:30 crc kubenswrapper[4801]: E1124 21:26:30.041557 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" Nov 24 21:26:30 crc kubenswrapper[4801]: E1124 21:26:30.041617 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb\\\"\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" podUID="7deaf86e-7ea9-45ca-9f31-787c92a15400" Nov 24 21:26:31 crc kubenswrapper[4801]: E1124 21:26:31.967050 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 21:26:31 crc 
kubenswrapper[4801]: E1124 21:26:31.967763 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdrsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-fstrk_openstack(bc7011d4-ca32-48d0-a29d-a42554930a07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:26:31 crc kubenswrapper[4801]: E1124 21:26:31.969280 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" podUID="bc7011d4-ca32-48d0-a29d-a42554930a07"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.071334 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" podUID="bc7011d4-ca32-48d0-a29d-a42554930a07"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.116739 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.117016 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw7w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-v4t7w_openstack(6ebaafa4-545b-4839-8228-7f09acfc53e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.118578 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w" podUID="6ebaafa4-545b-4839-8228-7f09acfc53e2"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.123145 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.123433 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nltfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ccvps_openstack(5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.124809 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" podUID="5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.135666 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.135966 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhl5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pcjww_openstack(351c1559-8490-41f1-a436-5b9ab663da7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:26:32 crc kubenswrapper[4801]: E1124 21:26:32.138528 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww" podUID="351c1559-8490-41f1-a436-5b9ab663da7c"
Nov 24 21:26:32 crc kubenswrapper[4801]: I1124 21:26:32.642785 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 21:26:32 crc kubenswrapper[4801]: I1124 21:26:32.731061 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 24 21:26:32 crc kubenswrapper[4801]: W1124 21:26:32.800713 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ff19f32_8c6d_4785_ac60_ce3d2c7939ad.slice/crio-65b3e4b8d145c8c064b0463982f1049048ebe909fde46dd4ffb3898be9e54429 WatchSource:0}: Error finding container 65b3e4b8d145c8c064b0463982f1049048ebe909fde46dd4ffb3898be9e54429: Status 404 returned error can't find the container with id 65b3e4b8d145c8c064b0463982f1049048ebe909fde46dd4ffb3898be9e54429
Nov 24 21:26:32 crc kubenswrapper[4801]: W1124 21:26:32.807649 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b1d77f1_fc9a_46ea_8fd8_629f36a3b659.slice/crio-1d954a8bbbcb5aada700daa642d6d4dbf71437388e1c16f8661e1ed028d64e1d WatchSource:0}: Error finding container 1d954a8bbbcb5aada700daa642d6d4dbf71437388e1c16f8661e1ed028d64e1d: Status 404 returned error can't find the container with id 1d954a8bbbcb5aada700daa642d6d4dbf71437388e1c16f8661e1ed028d64e1d
Nov 24 21:26:33 crc kubenswrapper[4801]: I1124 21:26:33.083177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659","Type":"ContainerStarted","Data":"1d954a8bbbcb5aada700daa642d6d4dbf71437388e1c16f8661e1ed028d64e1d"}
Nov 24 21:26:33 crc kubenswrapper[4801]: I1124 21:26:33.086071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad","Type":"ContainerStarted","Data":"65b3e4b8d145c8c064b0463982f1049048ebe909fde46dd4ffb3898be9e54429"}
Nov 24 21:26:33 crc kubenswrapper[4801]: E1124 21:26:33.089139 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" podUID="5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b"
Nov 24 21:26:33 crc kubenswrapper[4801]: I1124 21:26:33.260642 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2xlzh"]
Nov 24 21:26:33 crc kubenswrapper[4801]: I1124 21:26:33.837896 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:26:33 crc kubenswrapper[4801]: I1124 21:26:33.846425 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007031 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc\") pod \"351c1559-8490-41f1-a436-5b9ab663da7c\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") "
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007264 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7w5\" (UniqueName: \"kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5\") pod \"6ebaafa4-545b-4839-8228-7f09acfc53e2\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") "
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007672 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhl5w\" (UniqueName: \"kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w\") pod \"351c1559-8490-41f1-a436-5b9ab663da7c\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") "
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007726 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config\") pod \"6ebaafa4-545b-4839-8228-7f09acfc53e2\" (UID: \"6ebaafa4-545b-4839-8228-7f09acfc53e2\") "
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007781 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config\") pod \"351c1559-8490-41f1-a436-5b9ab663da7c\" (UID: \"351c1559-8490-41f1-a436-5b9ab663da7c\") "
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.007953 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "351c1559-8490-41f1-a436-5b9ab663da7c" (UID: "351c1559-8490-41f1-a436-5b9ab663da7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.008397 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config" (OuterVolumeSpecName: "config") pod "6ebaafa4-545b-4839-8228-7f09acfc53e2" (UID: "6ebaafa4-545b-4839-8228-7f09acfc53e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.008495 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.008609 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config" (OuterVolumeSpecName: "config") pod "351c1559-8490-41f1-a436-5b9ab663da7c" (UID: "351c1559-8490-41f1-a436-5b9ab663da7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.019680 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5" (OuterVolumeSpecName: "kube-api-access-vw7w5") pod "6ebaafa4-545b-4839-8228-7f09acfc53e2" (UID: "6ebaafa4-545b-4839-8228-7f09acfc53e2"). InnerVolumeSpecName "kube-api-access-vw7w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.020152 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w" (OuterVolumeSpecName: "kube-api-access-vhl5w") pod "351c1559-8490-41f1-a436-5b9ab663da7c" (UID: "351c1559-8490-41f1-a436-5b9ab663da7c"). InnerVolumeSpecName "kube-api-access-vhl5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.112174 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhl5w\" (UniqueName: \"kubernetes.io/projected/351c1559-8490-41f1-a436-5b9ab663da7c-kube-api-access-vhl5w\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.112226 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebaafa4-545b-4839-8228-7f09acfc53e2-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.112238 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351c1559-8490-41f1-a436-5b9ab663da7c-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.112247 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw7w5\" (UniqueName: \"kubernetes.io/projected/6ebaafa4-545b-4839-8228-7f09acfc53e2-kube-api-access-vw7w5\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.117321 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2xlzh" event={"ID":"b760e8c8-3610-49b7-bfb0-0f8c20b7a042","Type":"ContainerStarted","Data":"5a36b5f8fcd427b33834f1438d00f3347c1824ee432a2f2abb93478ed290a44d"}
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.120945 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww" event={"ID":"351c1559-8490-41f1-a436-5b9ab663da7c","Type":"ContainerDied","Data":"5162444d2f0b2aa4faeca8b6ffcab0c1168e0d6d1b39d8a0ac0fe364867669eb"}
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.121019 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pcjww"
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.127493 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w" event={"ID":"6ebaafa4-545b-4839-8228-7f09acfc53e2","Type":"ContainerDied","Data":"cdc38c2416e5a2ca816ebbafd1e8cd8af011426bd22f2f2fb6e5f51fd90282b1"}
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.127605 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v4t7w"
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.190500 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.203319 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pcjww"]
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.236937 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.249669 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v4t7w"]
Nov 24 21:26:34 crc kubenswrapper[4801]: E1124 21:26:34.515458 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Nov 24 21:26:34 crc kubenswrapper[4801]: E1124 21:26:34.515539 4801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Nov 24 21:26:34 crc kubenswrapper[4801]: E1124 21:26:34.515700 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9gz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(41c9700a-0355-4e6d-82d0-934fc45f5d52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 21:26:34 crc kubenswrapper[4801]: E1124 21:26:34.517060 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52"
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.684913 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351c1559-8490-41f1-a436-5b9ab663da7c" path="/var/lib/kubelet/pods/351c1559-8490-41f1-a436-5b9ab663da7c/volumes"
Nov 24 21:26:34 crc kubenswrapper[4801]: I1124 21:26:34.686003 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebaafa4-545b-4839-8228-7f09acfc53e2" path="/var/lib/kubelet/pods/6ebaafa4-545b-4839-8228-7f09acfc53e2/volumes"
Nov 24 21:26:35 crc kubenswrapper[4801]: E1124 21:26:35.190082 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52"
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.196110 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9a25adc5-f2a3-44b0-aeb6-8a45707600fa","Type":"ContainerStarted","Data":"36f320edbd3353b364089ef8d679eb024e8d8e049ef9ef178ed82b1ebfcffc76"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.198316 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.200174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bd963d5f-9d48-4924-a44c-d3a97a3e6461","Type":"ContainerStarted","Data":"0d2be747b35db1367a6057489c68f5f52478a14f7b1ec3d3d4774a511fbe445c"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.207891 4801 generic.go:334] "Generic (PLEG): container finished" podID="5520b43a-322d-44eb-87d5-b35af1ad70bc" containerID="798be8e7a1eed20a00d9bbc2639e7154dc699b7cbdc2ee33b3922808d6941c99" exitCode=0
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.208664 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sbns4" event={"ID":"5520b43a-322d-44eb-87d5-b35af1ad70bc","Type":"ContainerDied","Data":"798be8e7a1eed20a00d9bbc2639e7154dc699b7cbdc2ee33b3922808d6941c99"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.220890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj" event={"ID":"89430698-4742-4f29-93c4-ecd964255e62","Type":"ContainerStarted","Data":"9533763a67f403b11e3bb7b91611b72b79e5bdf691c7d14ac92385fc5afb65c2"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.221668 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qm4tj"
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.223276 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.737187565 podStartE2EDuration="38.223265383s" podCreationTimestamp="2025-11-24 21:25:58 +0000 UTC" firstStartedPulling="2025-11-24 21:25:59.762041428 +0000 UTC m=+1131.844628098" lastFinishedPulling="2025-11-24 21:26:35.248119246 +0000 UTC m=+1167.330705916" observedRunningTime="2025-11-24 21:26:36.221689804 +0000 UTC m=+1168.304276474" watchObservedRunningTime="2025-11-24 21:26:36.223265383 +0000 UTC m=+1168.305852053"
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.227540 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659","Type":"ContainerStarted","Data":"d1d65a18f289ee05edc4aa8a89c439a6f7dc99a07b1b497dc52b4a299c2057d7"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.228868 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad","Type":"ContainerStarted","Data":"19f6591868bc1d402bbf3bc1baa2297df1df3e7c73b1f57ab76efda19c86ec60"}
Nov 24 21:26:36 crc kubenswrapper[4801]: I1124 21:26:36.299521 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qm4tj" podStartSLOduration=4.508631662 podStartE2EDuration="33.299491352s" podCreationTimestamp="2025-11-24 21:26:03 +0000 UTC" firstStartedPulling="2025-11-24 21:26:04.604462198 +0000 UTC m=+1136.687048868" lastFinishedPulling="2025-11-24 21:26:33.395321888 +0000 UTC m=+1165.477908558" observedRunningTime="2025-11-24 21:26:36.290115388 +0000 UTC m=+1168.372702058" watchObservedRunningTime="2025-11-24 21:26:36.299491352 +0000 UTC m=+1168.382078032"
Nov 24 21:26:37 crc kubenswrapper[4801]: I1124 21:26:37.255564 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sbns4" event={"ID":"5520b43a-322d-44eb-87d5-b35af1ad70bc","Type":"ContainerStarted","Data":"22c283b368845b8cb2415325d94e324615963847dd44955cbfd151fb8e1c706e"}
Nov 24 21:26:38 crc kubenswrapper[4801]: I1124 21:26:38.004647 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6675df4db6-8rtvb" podUID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" containerName="console" containerID="cri-o://f7705a05d94e40c7a0eea56eb930f6ff22cf1c1bc13f06802ac424dbf73bb5dc" gracePeriod=15
Nov 24 21:26:38 crc kubenswrapper[4801]: I1124 21:26:38.278429 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6675df4db6-8rtvb_960ccabe-478a-4c38-ab6b-6aa9865d05d1/console/0.log"
Nov 24 21:26:38 crc kubenswrapper[4801]: I1124 21:26:38.278481 4801 generic.go:334] "Generic (PLEG): container finished" podID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" containerID="f7705a05d94e40c7a0eea56eb930f6ff22cf1c1bc13f06802ac424dbf73bb5dc" exitCode=2
Nov 24 21:26:38 crc kubenswrapper[4801]: I1124 21:26:38.278513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6675df4db6-8rtvb" event={"ID":"960ccabe-478a-4c38-ab6b-6aa9865d05d1","Type":"ContainerDied","Data":"f7705a05d94e40c7a0eea56eb930f6ff22cf1c1bc13f06802ac424dbf73bb5dc"}
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.293165 4801 generic.go:334] "Generic (PLEG): container finished" podID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerID="0d2be747b35db1367a6057489c68f5f52478a14f7b1ec3d3d4774a511fbe445c" exitCode=0
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.293272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bd963d5f-9d48-4924-a44c-d3a97a3e6461","Type":"ContainerDied","Data":"0d2be747b35db1367a6057489c68f5f52478a14f7b1ec3d3d4774a511fbe445c"}
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.932745 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6675df4db6-8rtvb_960ccabe-478a-4c38-ab6b-6aa9865d05d1/console/0.log"
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.933870 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6675df4db6-8rtvb"
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.987526 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.987912 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.987980 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.988009 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.988051 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.988297 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.988330 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lphbh\" (UniqueName: \"kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh\") pod \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\" (UID: \"960ccabe-478a-4c38-ab6b-6aa9865d05d1\") "
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.989430 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca" (OuterVolumeSpecName: "service-ca") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.989491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config" (OuterVolumeSpecName: "console-config") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.989820 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:39 crc kubenswrapper[4801]: I1124 21:26:39.993476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007187 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007802 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007864 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007883 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007897 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.007910 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960ccabe-478a-4c38-ab6b-6aa9865d05d1-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.022668 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh" (OuterVolumeSpecName: "kube-api-access-lphbh") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "kube-api-access-lphbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.037475 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "960ccabe-478a-4c38-ab6b-6aa9865d05d1" (UID: "960ccabe-478a-4c38-ab6b-6aa9865d05d1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.110128 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960ccabe-478a-4c38-ab6b-6aa9865d05d1-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.110154 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lphbh\" (UniqueName: \"kubernetes.io/projected/960ccabe-478a-4c38-ab6b-6aa9865d05d1-kube-api-access-lphbh\") on node \"crc\" DevicePath \"\""
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.329717 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6675df4db6-8rtvb_960ccabe-478a-4c38-ab6b-6aa9865d05d1/console/0.log"
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.329781 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6675df4db6-8rtvb" event={"ID":"960ccabe-478a-4c38-ab6b-6aa9865d05d1","Type":"ContainerDied","Data":"4f4200433445ad46e54e4f46e408dc805e97ad58763d699084605b6b4a2ae6be"}
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.329831 4801 scope.go:117] "RemoveContainer" containerID="f7705a05d94e40c7a0eea56eb930f6ff22cf1c1bc13f06802ac424dbf73bb5dc"
Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.329982 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6675df4db6-8rtvb" Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.375921 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.383464 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6675df4db6-8rtvb"] Nov 24 21:26:40 crc kubenswrapper[4801]: I1124 21:26:40.678093 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" path="/var/lib/kubelet/pods/960ccabe-478a-4c38-ab6b-6aa9865d05d1/volumes" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.341694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6e61e62-b039-4898-b4fa-f20160b67641","Type":"ContainerStarted","Data":"eeefae05764b097afa4c13f448e0851af6de93c6ba587a777f831f45eaebb6fd"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.346644 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2b1d77f1-fc9a-46ea-8fd8-629f36a3b659","Type":"ContainerStarted","Data":"2fd569bc14ada280fcdd25021df78df02e6d22199e29630b50f11c20af2d513c"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.348844 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2xlzh" event={"ID":"b760e8c8-3610-49b7-bfb0-0f8c20b7a042","Type":"ContainerStarted","Data":"a4ed07e968b64f6c35295457527c054ffb774752a4d1c316407d0b35ffb88360"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.351547 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5ff19f32-8c6d-4785-ac60-ce3d2c7939ad","Type":"ContainerStarted","Data":"7e91fda4b8c9b8e578402a45320674b3df048ed970e8382f24cb7dc17a7035ae"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.354820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bd963d5f-9d48-4924-a44c-d3a97a3e6461","Type":"ContainerStarted","Data":"f521d6a66ae959e66182d9475b7df13d3449cad3f0c4a38b754454c08897ba0f"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.358084 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sbns4" event={"ID":"5520b43a-322d-44eb-87d5-b35af1ad70bc","Type":"ContainerStarted","Data":"f96a595aca7c7ec2d65db71b07425b2fb7f630b36a00abd4600327630f842218"} Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.358259 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.358524 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.445495 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sbns4" podStartSLOduration=20.905320063 podStartE2EDuration="38.445470635s" podCreationTimestamp="2025-11-24 21:26:03 +0000 UTC" firstStartedPulling="2025-11-24 21:26:15.734701441 +0000 UTC m=+1147.817288111" lastFinishedPulling="2025-11-24 21:26:33.274852013 +0000 UTC m=+1165.357438683" observedRunningTime="2025-11-24 21:26:41.389342686 +0000 UTC m=+1173.471929356" watchObservedRunningTime="2025-11-24 21:26:41.445470635 +0000 UTC m=+1173.528057305" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.452117 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.166377309 podStartE2EDuration="34.452096832s" podCreationTimestamp="2025-11-24 21:26:07 +0000 UTC" firstStartedPulling="2025-11-24 21:26:32.81011396 +0000 UTC m=+1164.892700670" lastFinishedPulling="2025-11-24 21:26:40.095833513 +0000 UTC m=+1172.178420193" observedRunningTime="2025-11-24 21:26:41.411237032 +0000 UTC 
m=+1173.493823702" watchObservedRunningTime="2025-11-24 21:26:41.452096832 +0000 UTC m=+1173.534683502" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.468509 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.594750518 podStartE2EDuration="45.468483955s" podCreationTimestamp="2025-11-24 21:25:56 +0000 UTC" firstStartedPulling="2025-11-24 21:25:59.352507944 +0000 UTC m=+1131.435094614" lastFinishedPulling="2025-11-24 21:26:35.226241391 +0000 UTC m=+1167.308828051" observedRunningTime="2025-11-24 21:26:41.441550671 +0000 UTC m=+1173.524137351" watchObservedRunningTime="2025-11-24 21:26:41.468483955 +0000 UTC m=+1173.551070635" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.476518 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2xlzh" podStartSLOduration=28.892804603 podStartE2EDuration="35.476494157s" podCreationTimestamp="2025-11-24 21:26:06 +0000 UTC" firstStartedPulling="2025-11-24 21:26:33.401414459 +0000 UTC m=+1165.484001129" lastFinishedPulling="2025-11-24 21:26:39.985104013 +0000 UTC m=+1172.067690683" observedRunningTime="2025-11-24 21:26:41.463500059 +0000 UTC m=+1173.546086739" watchObservedRunningTime="2025-11-24 21:26:41.476494157 +0000 UTC m=+1173.559080827" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.496075 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.163949774 podStartE2EDuration="34.496056s" podCreationTimestamp="2025-11-24 21:26:07 +0000 UTC" firstStartedPulling="2025-11-24 21:26:32.814705374 +0000 UTC m=+1164.897292054" lastFinishedPulling="2025-11-24 21:26:40.14681161 +0000 UTC m=+1172.229398280" observedRunningTime="2025-11-24 21:26:41.492822408 +0000 UTC m=+1173.575409078" watchObservedRunningTime="2025-11-24 21:26:41.496056 +0000 UTC m=+1173.578642670" Nov 24 21:26:41 crc 
kubenswrapper[4801]: I1124 21:26:41.668636 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.741146 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.870205 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"] Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.907130 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.943577 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:41 crc kubenswrapper[4801]: E1124 21:26:41.944204 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" containerName="console" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.944224 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" containerName="console" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.944466 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="960ccabe-478a-4c38-ab6b-6aa9865d05d1" containerName="console" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.945868 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.948231 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 21:26:41 crc kubenswrapper[4801]: I1124 21:26:41.961773 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.063067 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.063130 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.063186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45t4\" (UniqueName: \"kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.063220 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.166419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.166568 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.166598 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.166642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45t4\" (UniqueName: \"kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.168004 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc 
kubenswrapper[4801]: I1124 21:26:42.168704 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.183202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.323262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45t4\" (UniqueName: \"kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4\") pod \"dnsmasq-dns-7fd796d7df-89cng\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.364431 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"] Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.392158 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.450316 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"] Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.452708 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.459714 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.484536 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.484586 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqtcz\" (UniqueName: \"kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.484612 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.484641 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.484767 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.485313 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.507105 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"] Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.585920 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nltfb\" (UniqueName: \"kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb\") pod \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.586117 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc\") pod \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.586311 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config\") pod \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\" (UID: \"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b\") " Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.587280 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.587312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqtcz\" (UniqueName: \"kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.587333 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.587389 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.587610 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.597997 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 
21:26:42.598824 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.599543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.599846 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.603067 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.604199 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config" (OuterVolumeSpecName: "config") pod "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b" (UID: "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.606637 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b" (UID: "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.618659 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb" (OuterVolumeSpecName: "kube-api-access-nltfb") pod "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b" (UID: "5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b"). InnerVolumeSpecName "kube-api-access-nltfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.652293 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.662869 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.665583 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqtcz\" (UniqueName: \"kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz\") pod \"dnsmasq-dns-86db49b7ff-2m66p\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.690178 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltfb\" (UniqueName: \"kubernetes.io/projected/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-kube-api-access-nltfb\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:42 crc kubenswrapper[4801]: 
I1124 21:26:42.690226 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.690238 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:42 crc kubenswrapper[4801]: I1124 21:26:42.835251 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.016936 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.104575 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdrsz\" (UniqueName: \"kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz\") pod \"bc7011d4-ca32-48d0-a29d-a42554930a07\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.105379 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config\") pod \"bc7011d4-ca32-48d0-a29d-a42554930a07\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.105476 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc\") pod \"bc7011d4-ca32-48d0-a29d-a42554930a07\" (UID: \"bc7011d4-ca32-48d0-a29d-a42554930a07\") " Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.111442 4801 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config" (OuterVolumeSpecName: "config") pod "bc7011d4-ca32-48d0-a29d-a42554930a07" (UID: "bc7011d4-ca32-48d0-a29d-a42554930a07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.117742 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc7011d4-ca32-48d0-a29d-a42554930a07" (UID: "bc7011d4-ca32-48d0-a29d-a42554930a07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.132688 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz" (OuterVolumeSpecName: "kube-api-access-fdrsz") pod "bc7011d4-ca32-48d0-a29d-a42554930a07" (UID: "bc7011d4-ca32-48d0-a29d-a42554930a07"). InnerVolumeSpecName "kube-api-access-fdrsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.214793 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.214839 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc7011d4-ca32-48d0-a29d-a42554930a07-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.214852 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdrsz\" (UniqueName: \"kubernetes.io/projected/bc7011d4-ca32-48d0-a29d-a42554930a07-kube-api-access-fdrsz\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.400865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerStarted","Data":"b56a953951e0148a3ed7af819ebbd4de7c7816d9238af1bd0bd53a4ca0afb848"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.409849 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerStarted","Data":"a857a39ae5ed62c45ea72764d9ae7bed043f8de581aa188be2bce5c73eed473f"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.412156 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" event={"ID":"5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b","Type":"ContainerDied","Data":"2544723fa90fe2f84df6a94ee22376894a0565b79afbadbcbf4e7b68cf6908ac"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.412262 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ccvps" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.416463 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerStarted","Data":"dcfc6398313ef14cefed31e1a783d37f942b5f8c01dcd5371474a4e544086dd1"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.418375 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerStarted","Data":"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.421315 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" event={"ID":"bc7011d4-ca32-48d0-a29d-a42554930a07","Type":"ContainerDied","Data":"e7c7756239e38b815165c15fffaa506b83be45f54032358e11477cf02cd598b0"} Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.421498 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fstrk" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.423031 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.479492 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.492435 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.515502 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ccvps"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.640867 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.650356 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fstrk"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.789254 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.794081 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.799218 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-445cf" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.799242 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.799492 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.799541 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.808818 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.900532 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936354 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-scripts\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936437 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-config\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936493 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936519 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-779lp\" (UniqueName: \"kubernetes.io/projected/3a2dce8d-7a93-4896-b362-5af3371f1916-kube-api-access-779lp\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936538 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:43 crc kubenswrapper[4801]: I1124 21:26:43.936586 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038685 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-779lp\" (UniqueName: \"kubernetes.io/projected/3a2dce8d-7a93-4896-b362-5af3371f1916-kube-api-access-779lp\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038766 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038787 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038826 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038951 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-scripts\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.038988 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-config\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.039897 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-config\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.041885 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.042149 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a2dce8d-7a93-4896-b362-5af3371f1916-scripts\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.051893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.054311 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc 
kubenswrapper[4801]: I1124 21:26:44.059121 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2dce8d-7a93-4896-b362-5af3371f1916-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.069068 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-779lp\" (UniqueName: \"kubernetes.io/projected/3a2dce8d-7a93-4896-b362-5af3371f1916-kube-api-access-779lp\") pod \"ovn-northd-0\" (UID: \"3a2dce8d-7a93-4896-b362-5af3371f1916\") " pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.116164 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"] Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.147123 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.155835 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.434489 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" event={"ID":"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85","Type":"ContainerStarted","Data":"3e242d632d39917f704e1a4bebb677979981678eda3ecbcd8f3edc761571c50d"} Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.436142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" event={"ID":"309c307d-d70f-45fb-be0d-62c446c3fecf","Type":"ContainerStarted","Data":"16bd6363b6977c81e0df89552e4dda590e8327d05d1f36dc1731ef22b62c6a8c"} Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.697743 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b" 
path="/var/lib/kubelet/pods/5b5ea9d5-f483-4b62-a4fe-fdb7f0f2dc6b/volumes" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.700143 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7011d4-ca32-48d0-a29d-a42554930a07" path="/var/lib/kubelet/pods/bc7011d4-ca32-48d0-a29d-a42554930a07/volumes" Nov 24 21:26:44 crc kubenswrapper[4801]: I1124 21:26:44.715740 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:26:45 crc kubenswrapper[4801]: I1124 21:26:45.446954 4801 generic.go:334] "Generic (PLEG): container finished" podID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerID="57c192115cd1dba821a8f14f7c32cd733fc0682bfad521459bccbfad21d8cd1e" exitCode=0 Nov 24 21:26:45 crc kubenswrapper[4801]: I1124 21:26:45.447027 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" event={"ID":"309c307d-d70f-45fb-be0d-62c446c3fecf","Type":"ContainerDied","Data":"57c192115cd1dba821a8f14f7c32cd733fc0682bfad521459bccbfad21d8cd1e"} Nov 24 21:26:45 crc kubenswrapper[4801]: I1124 21:26:45.449451 4801 generic.go:334] "Generic (PLEG): container finished" podID="c6e61e62-b039-4898-b4fa-f20160b67641" containerID="eeefae05764b097afa4c13f448e0851af6de93c6ba587a777f831f45eaebb6fd" exitCode=0 Nov 24 21:26:45 crc kubenswrapper[4801]: I1124 21:26:45.449535 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6e61e62-b039-4898-b4fa-f20160b67641","Type":"ContainerDied","Data":"eeefae05764b097afa4c13f448e0851af6de93c6ba587a777f831f45eaebb6fd"} Nov 24 21:26:45 crc kubenswrapper[4801]: I1124 21:26:45.451567 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a2dce8d-7a93-4896-b362-5af3371f1916","Type":"ContainerStarted","Data":"cef2498afec8db2fc7acfd5ea95228db7a99da3dbbaba259cca018965f4bd041"} Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.464452 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerStarted","Data":"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266"} Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.469574 4801 generic.go:334] "Generic (PLEG): container finished" podID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerID="f917db9e2d3873179c9db9c514b73f792b2af239e76c7532df3d37f604ffb182" exitCode=0 Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.469710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" event={"ID":"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85","Type":"ContainerDied","Data":"f917db9e2d3873179c9db9c514b73f792b2af239e76c7532df3d37f604ffb182"} Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.475058 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6e61e62-b039-4898-b4fa-f20160b67641","Type":"ContainerStarted","Data":"ee58f2ee9fbefc15fb3a58d27ddd6faa24a6300398d5229aa8a772ce94a7b46c"} Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.525328 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371985.329477 podStartE2EDuration="51.525299324s" podCreationTimestamp="2025-11-24 21:25:55 +0000 UTC" firstStartedPulling="2025-11-24 21:25:57.802783163 +0000 UTC m=+1129.885369823" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:26:46.520620948 +0000 UTC m=+1178.603207658" watchObservedRunningTime="2025-11-24 21:26:46.525299324 +0000 UTC m=+1178.607886004" Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.897082 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 21:26:46 crc kubenswrapper[4801]: I1124 21:26:46.897180 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-galera-0" Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.500144 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" event={"ID":"7deaf86e-7ea9-45ca-9f31-787c92a15400","Type":"ContainerStarted","Data":"11f4cb411b18562361e55c0a51dfe7ebf4439f2c6ad0f1638e62c32608dc57c1"} Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.503262 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a2dce8d-7a93-4896-b362-5af3371f1916","Type":"ContainerStarted","Data":"9ae9547120862b778eebb0f8d1d448e38351db1f36b38211136386ce3259f308"} Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.506451 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" event={"ID":"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85","Type":"ContainerStarted","Data":"b244eddb4214977cf5f9c16891d15d88c115ab3a91a40b167a04e1d9bbaa6a29"} Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.506944 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.508836 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" event={"ID":"309c307d-d70f-45fb-be0d-62c446c3fecf","Type":"ContainerStarted","Data":"6b5940a3c41bcf32d2e48fd807103bc2535ad074edb4d3e6d09ccaba9ec0d215"} Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.509147 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.551820 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-xjv9g" podStartSLOduration=3.619453378 podStartE2EDuration="46.548107045s" podCreationTimestamp="2025-11-24 21:26:01 +0000 UTC" 
firstStartedPulling="2025-11-24 21:26:04.122921218 +0000 UTC m=+1136.205507888" lastFinishedPulling="2025-11-24 21:26:47.051574865 +0000 UTC m=+1179.134161555" observedRunningTime="2025-11-24 21:26:47.534686294 +0000 UTC m=+1179.617272964" watchObservedRunningTime="2025-11-24 21:26:47.548107045 +0000 UTC m=+1179.630693715" Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.562033 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" podStartSLOduration=6.142137242 podStartE2EDuration="6.5620103s" podCreationTimestamp="2025-11-24 21:26:41 +0000 UTC" firstStartedPulling="2025-11-24 21:26:44.186569288 +0000 UTC m=+1176.269155958" lastFinishedPulling="2025-11-24 21:26:44.606442346 +0000 UTC m=+1176.689029016" observedRunningTime="2025-11-24 21:26:47.561758032 +0000 UTC m=+1179.644344702" watchObservedRunningTime="2025-11-24 21:26:47.5620103 +0000 UTC m=+1179.644596970" Nov 24 21:26:47 crc kubenswrapper[4801]: I1124 21:26:47.615288 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" podStartSLOduration=5.024244669 podStartE2EDuration="5.615267849s" podCreationTimestamp="2025-11-24 21:26:42 +0000 UTC" firstStartedPulling="2025-11-24 21:26:44.125416462 +0000 UTC m=+1176.208003132" lastFinishedPulling="2025-11-24 21:26:44.716439642 +0000 UTC m=+1176.799026312" observedRunningTime="2025-11-24 21:26:47.61274602 +0000 UTC m=+1179.695332680" watchObservedRunningTime="2025-11-24 21:26:47.615267849 +0000 UTC m=+1179.697854519" Nov 24 21:26:48 crc kubenswrapper[4801]: I1124 21:26:48.551974 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a2dce8d-7a93-4896-b362-5af3371f1916","Type":"ContainerStarted","Data":"234d22e7a2e6f038a438ca6e2040552b965201ae2de94f28200ea8f9e354ffea"} Nov 24 21:26:48 crc kubenswrapper[4801]: I1124 21:26:48.565432 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 21:26:48 crc kubenswrapper[4801]: I1124 21:26:48.565485 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 21:26:48 crc kubenswrapper[4801]: I1124 21:26:48.589906 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.264328917 podStartE2EDuration="5.5898787s" podCreationTimestamp="2025-11-24 21:26:43 +0000 UTC" firstStartedPulling="2025-11-24 21:26:44.720738747 +0000 UTC m=+1176.803325417" lastFinishedPulling="2025-11-24 21:26:47.04628852 +0000 UTC m=+1179.128875200" observedRunningTime="2025-11-24 21:26:48.582323092 +0000 UTC m=+1180.664909802" watchObservedRunningTime="2025-11-24 21:26:48.5898787 +0000 UTC m=+1180.672465370" Nov 24 21:26:48 crc kubenswrapper[4801]: I1124 21:26:48.834215 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 21:26:49 crc kubenswrapper[4801]: I1124 21:26:49.148731 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 21:26:49 crc kubenswrapper[4801]: I1124 21:26:49.564009 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41c9700a-0355-4e6d-82d0-934fc45f5d52","Type":"ContainerStarted","Data":"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367"} Nov 24 21:26:49 crc kubenswrapper[4801]: I1124 21:26:49.565245 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:26:49 crc kubenswrapper[4801]: I1124 21:26:49.592434 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.763696767 podStartE2EDuration="49.592407575s" podCreationTimestamp="2025-11-24 21:26:00 +0000 UTC" firstStartedPulling="2025-11-24 21:26:02.23535197 
+0000 UTC m=+1134.317938630" lastFinishedPulling="2025-11-24 21:26:49.064062768 +0000 UTC m=+1181.146649438" observedRunningTime="2025-11-24 21:26:49.590388691 +0000 UTC m=+1181.672975391" watchObservedRunningTime="2025-11-24 21:26:49.592407575 +0000 UTC m=+1181.674994245" Nov 24 21:26:49 crc kubenswrapper[4801]: I1124 21:26:49.667134 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.216870 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.217634 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="dnsmasq-dns" containerID="cri-o://6b5940a3c41bcf32d2e48fd807103bc2535ad074edb4d3e6d09ccaba9ec0d215" gracePeriod=10 Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.321672 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.324165 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.376258 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.452524 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfd9\" (UniqueName: \"kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.452648 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.452678 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.452776 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.452812 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.555464 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfd9\" (UniqueName: \"kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.555572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.555601 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.556591 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.556640 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.557485 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.558163 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.557521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.558463 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc\") pod \"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.589568 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfd9\" (UniqueName: \"kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9\") pod 
\"dnsmasq-dns-698758b865-ntxvj\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.591690 4801 generic.go:334] "Generic (PLEG): container finished" podID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerID="6b5940a3c41bcf32d2e48fd807103bc2535ad074edb4d3e6d09ccaba9ec0d215" exitCode=0 Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.591736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" event={"ID":"309c307d-d70f-45fb-be0d-62c446c3fecf","Type":"ContainerDied","Data":"6b5940a3c41bcf32d2e48fd807103bc2535ad074edb4d3e6d09ccaba9ec0d215"} Nov 24 21:26:51 crc kubenswrapper[4801]: I1124 21:26:51.700464 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.261280 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:26:52 crc kubenswrapper[4801]: W1124 21:26:52.272708 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac7b18f4_5d92_491c_a4f9_de94a69c61f1.slice/crio-2690973a3544b210914f7d4e8ad1b8a9bf2cf194ee87f630fe970fcd2b5eba35 WatchSource:0}: Error finding container 2690973a3544b210914f7d4e8ad1b8a9bf2cf194ee87f630fe970fcd2b5eba35: Status 404 returned error can't find the container with id 2690973a3544b210914f7d4e8ad1b8a9bf2cf194ee87f630fe970fcd2b5eba35 Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.451315 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.474764 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:26:52 crc kubenswrapper[4801]: E1124 21:26:52.475333 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="init" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.475355 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="init" Nov 24 21:26:52 crc kubenswrapper[4801]: E1124 21:26:52.475409 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="dnsmasq-dns" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.475417 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="dnsmasq-dns" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.475665 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" containerName="dnsmasq-dns" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.481580 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45t4\" (UniqueName: \"kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4\") pod \"309c307d-d70f-45fb-be0d-62c446c3fecf\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.481637 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb\") pod \"309c307d-d70f-45fb-be0d-62c446c3fecf\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.481702 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc\") pod \"309c307d-d70f-45fb-be0d-62c446c3fecf\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.481740 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config\") pod \"309c307d-d70f-45fb-be0d-62c446c3fecf\" (UID: \"309c307d-d70f-45fb-be0d-62c446c3fecf\") " Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.491581 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.502782 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4" (OuterVolumeSpecName: "kube-api-access-z45t4") pod "309c307d-d70f-45fb-be0d-62c446c3fecf" (UID: "309c307d-d70f-45fb-be0d-62c446c3fecf"). InnerVolumeSpecName "kube-api-access-z45t4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.503225 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.503338 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.503482 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rdqdj" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.504724 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.512314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.584104 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "309c307d-d70f-45fb-be0d-62c446c3fecf" (UID: "309c307d-d70f-45fb-be0d-62c446c3fecf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.584218 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z45t4\" (UniqueName: \"kubernetes.io/projected/309c307d-d70f-45fb-be0d-62c446c3fecf-kube-api-access-z45t4\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.615085 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ntxvj" event={"ID":"ac7b18f4-5d92-491c-a4f9-de94a69c61f1","Type":"ContainerStarted","Data":"2690973a3544b210914f7d4e8ad1b8a9bf2cf194ee87f630fe970fcd2b5eba35"} Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.619209 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" event={"ID":"309c307d-d70f-45fb-be0d-62c446c3fecf","Type":"ContainerDied","Data":"16bd6363b6977c81e0df89552e4dda590e8327d05d1f36dc1731ef22b62c6a8c"} Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.619252 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-89cng" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.619287 4801 scope.go:117] "RemoveContainer" containerID="6b5940a3c41bcf32d2e48fd807103bc2535ad074edb4d3e6d09ccaba9ec0d215" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.625782 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config" (OuterVolumeSpecName: "config") pod "309c307d-d70f-45fb-be0d-62c446c3fecf" (UID: "309c307d-d70f-45fb-be0d-62c446c3fecf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.666381 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "309c307d-d70f-45fb-be0d-62c446c3fecf" (UID: "309c307d-d70f-45fb-be0d-62c446c3fecf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.686895 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687024 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-cache\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687116 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mrj\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-kube-api-access-99mrj\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc 
kubenswrapper[4801]: I1124 21:26:52.687186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-lock\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687244 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687262 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.687275 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309c307d-d70f-45fb-be0d-62c446c3fecf-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.763449 4801 scope.go:117] "RemoveContainer" containerID="57c192115cd1dba821a8f14f7c32cd733fc0682bfad521459bccbfad21d8cd1e" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.789146 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mrj\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-kube-api-access-99mrj\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.789294 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-lock\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " 
pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.789329 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.789444 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-cache\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.789526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: E1124 21:26:52.789705 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:26:52 crc kubenswrapper[4801]: E1124 21:26:52.789724 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:26:52 crc kubenswrapper[4801]: E1124 21:26:52.789777 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. No retries permitted until 2025-11-24 21:26:53.289758805 +0000 UTC m=+1185.372345475 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.791333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-lock\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.791849 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.793848 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd9d3-072d-4cc1-b164-9701cb421564-cache\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.832581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mrj\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-kube-api-access-99mrj\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.833880 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " 
pod="openstack/swift-storage-0" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.838723 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.962053 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:52 crc kubenswrapper[4801]: I1124 21:26:52.972534 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-89cng"] Nov 24 21:26:53 crc kubenswrapper[4801]: I1124 21:26:53.031465 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 21:26:53 crc kubenswrapper[4801]: I1124 21:26:53.129898 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 21:26:53 crc kubenswrapper[4801]: I1124 21:26:53.313695 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:53 crc kubenswrapper[4801]: E1124 21:26:53.313972 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:26:53 crc kubenswrapper[4801]: E1124 21:26:53.314016 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:26:53 crc kubenswrapper[4801]: E1124 21:26:53.314094 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:26:54.314068464 +0000 UTC m=+1186.396655134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found Nov 24 21:26:53 crc kubenswrapper[4801]: I1124 21:26:53.978238 4801 generic.go:334] "Generic (PLEG): container finished" podID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerID="3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54" exitCode=0 Nov 24 21:26:53 crc kubenswrapper[4801]: I1124 21:26:53.979189 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ntxvj" event={"ID":"ac7b18f4-5d92-491c-a4f9-de94a69c61f1","Type":"ContainerDied","Data":"3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54"} Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.207742 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4z6pq"] Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.209757 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.227963 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e647-account-create-jzbwv"] Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.229898 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.237840 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.283923 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4z6pq"] Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.320487 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e647-account-create-jzbwv"] Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.359331 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.359701 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62w2k\" (UniqueName: \"kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.359802 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wtf\" (UniqueName: \"kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.359908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.360102 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:54 crc kubenswrapper[4801]: E1124 21:26:54.360531 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:26:54 crc kubenswrapper[4801]: E1124 21:26:54.360548 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:26:54 crc kubenswrapper[4801]: E1124 21:26:54.360601 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. No retries permitted until 2025-11-24 21:26:56.360583847 +0000 UTC m=+1188.443170717 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.462919 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.463279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62w2k\" (UniqueName: \"kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.463411 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wtf\" (UniqueName: \"kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.463527 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.464182 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.464792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.529390 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wtf\" (UniqueName: \"kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf\") pod \"glance-e647-account-create-jzbwv\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") " pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.531031 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62w2k\" (UniqueName: \"kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k\") pod \"glance-db-create-4z6pq\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") " pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.568092 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4z6pq" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.584954 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e647-account-create-jzbwv" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.689974 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309c307d-d70f-45fb-be0d-62c446c3fecf" path="/var/lib/kubelet/pods/309c307d-d70f-45fb-be0d-62c446c3fecf/volumes" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.991893 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ntxvj" event={"ID":"ac7b18f4-5d92-491c-a4f9-de94a69c61f1","Type":"ContainerStarted","Data":"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277"} Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.992408 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.995135 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerID="00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266" exitCode=0 Nov 24 21:26:54 crc kubenswrapper[4801]: I1124 21:26:54.995166 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerDied","Data":"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266"} Nov 24 21:26:55 crc kubenswrapper[4801]: I1124 21:26:55.043378 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-ntxvj" podStartSLOduration=4.043321031 podStartE2EDuration="4.043321031s" podCreationTimestamp="2025-11-24 21:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:26:55.018923577 +0000 UTC m=+1187.101510237" watchObservedRunningTime="2025-11-24 21:26:55.043321031 +0000 UTC m=+1187.125907701" Nov 24 21:26:55 crc kubenswrapper[4801]: I1124 
21:26:55.188490 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e647-account-create-jzbwv"] Nov 24 21:26:55 crc kubenswrapper[4801]: W1124 21:26:55.292506 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dd71c9_f4f7_42a5_97f4_1ba4c8b4ccd4.slice/crio-3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013 WatchSource:0}: Error finding container 3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013: Status 404 returned error can't find the container with id 3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013 Nov 24 21:26:55 crc kubenswrapper[4801]: I1124 21:26:55.305122 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4z6pq"] Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.021448 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e647-account-create-jzbwv" event={"ID":"ac13fbea-bf18-449a-aa48-65aefa77699d","Type":"ContainerStarted","Data":"3d6a0e9b7d51c36e0449a256925ee79aece42eeb86dd7f932a2e22a326b46a41"} Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.028088 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4z6pq" event={"ID":"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4","Type":"ContainerStarted","Data":"3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013"} Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.299238 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dxtv8"] Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.301207 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.307295 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.309865 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.310188 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.326738 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dxtv8"] Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.419883 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dxtv8"] Nov 24 21:26:56 crc kubenswrapper[4801]: E1124 21:26:56.422097 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-bgs7m ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-dxtv8" podUID="89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.432766 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rbr7q"] Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.434677 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.442295 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rbr7q"] Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449093 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgs7m\" (UniqueName: \"kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449198 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449228 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449320 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.449403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: E1124 21:26:56.449640 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:26:56 crc kubenswrapper[4801]: E1124 21:26:56.449655 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:26:56 crc kubenswrapper[4801]: E1124 21:26:56.449700 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift 
podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. No retries permitted until 2025-11-24 21:27:00.449683061 +0000 UTC m=+1192.532269731 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.551877 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.551941 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.551966 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.552086 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2nk\" (UniqueName: \"kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 
crc kubenswrapper[4801]: I1124 21:26:56.552320 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.552602 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.552897 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553030 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553136 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553309 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgs7m\" (UniqueName: \"kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553401 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553463 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553523 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553647 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.553800 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.554403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.554698 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.560639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.560765 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.562732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.571620 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgs7m\" (UniqueName: \"kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m\") pod \"swift-ring-rebalance-dxtv8\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.657839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.658409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.658689 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.658695 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift\") pod 
\"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.659629 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.659151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.661108 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.661457 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.661604 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2nk\" (UniqueName: \"kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " 
pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.661791 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.665273 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.668265 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.669296 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.714692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2nk\" (UniqueName: \"kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk\") pod \"swift-ring-rebalance-rbr7q\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:56 crc kubenswrapper[4801]: I1124 21:26:56.761968 
4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.052558 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.080606 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187321 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187431 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgs7m\" (UniqueName: \"kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187489 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187510 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187533 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187691 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.187813 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift\") pod \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\" (UID: \"89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48\") " Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188112 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts" (OuterVolumeSpecName: "scripts") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188166 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188395 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188687 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188702 4801 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.188712 4801 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.196301 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.197664 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m" (OuterVolumeSpecName: "kube-api-access-bgs7m") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "kube-api-access-bgs7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.201868 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.218596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" (UID: "89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.291495 4801 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.291543 4801 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.291557 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.291568 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgs7m\" (UniqueName: \"kubernetes.io/projected/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48-kube-api-access-bgs7m\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4801]: I1124 21:26:57.417084 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rbr7q"] Nov 24 21:26:57 crc kubenswrapper[4801]: W1124 21:26:57.420743 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4add1738_d33e_4fc5_aaaf_ae28dcd88220.slice/crio-a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8 WatchSource:0}: Error finding container a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8: Status 404 returned error can't find the container with id a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8 Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.066414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbr7q" 
event={"ID":"4add1738-d33e-4fc5-aaaf-ae28dcd88220","Type":"ContainerStarted","Data":"a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8"} Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.069035 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e647-account-create-jzbwv" event={"ID":"ac13fbea-bf18-449a-aa48-65aefa77699d","Type":"ContainerStarted","Data":"a2a78734a3e7c4dc8d8b5f6acff32b2f985aace6fc6f2f9874b5111e74c4e4b8"} Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.070603 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dxtv8" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.070576 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4z6pq" event={"ID":"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4","Type":"ContainerStarted","Data":"3f97b4efe5a8f1855d96953c7fe2e5641a7bdc0db247a241073e48797642bdd5"} Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.095276 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e647-account-create-jzbwv" podStartSLOduration=4.095254316 podStartE2EDuration="4.095254316s" podCreationTimestamp="2025-11-24 21:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:26:58.093737139 +0000 UTC m=+1190.176323829" watchObservedRunningTime="2025-11-24 21:26:58.095254316 +0000 UTC m=+1190.177840986" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.186527 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dxtv8"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.199325 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-dxtv8"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.388726 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-create-2qsxb"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.390750 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.406624 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2qsxb"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.520774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts\") pod \"keystone-db-create-2qsxb\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.520872 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhdg\" (UniqueName: \"kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg\") pod \"keystone-db-create-2qsxb\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.522917 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7782-account-create-5lrd6"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.525867 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.528873 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.548808 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7782-account-create-5lrd6"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.623394 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.623912 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hblkz\" (UniqueName: \"kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.624064 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts\") pod \"keystone-db-create-2qsxb\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.624301 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhdg\" (UniqueName: \"kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg\") pod \"keystone-db-create-2qsxb\" (UID: 
\"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.625577 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts\") pod \"keystone-db-create-2qsxb\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.650585 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhdg\" (UniqueName: \"kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg\") pod \"keystone-db-create-2qsxb\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") " pod="openstack/keystone-db-create-2qsxb" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.677944 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48" path="/var/lib/kubelet/pods/89e1a231-dfe7-4e98-bd3d-1b0c62cc5f48/volumes" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.728214 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hblkz\" (UniqueName: \"kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.728418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.729317 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.748842 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pxnc7"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.750665 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxnc7" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.751032 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hblkz\" (UniqueName: \"kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz\") pod \"keystone-7782-account-create-5lrd6\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.762754 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pxnc7"] Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.776092 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2qsxb"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.835359 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6sf2\" (UniqueName: \"kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.836033 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.862876 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7782-account-create-5lrd6"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.880422 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3ad7-account-create-jw6sj"]
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.883214 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.885040 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.908287 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3ad7-account-create-jw6sj"]
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.938467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6sf2\" (UniqueName: \"kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.938679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.940496 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:58 crc kubenswrapper[4801]: I1124 21:26:58.970528 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6sf2\" (UniqueName: \"kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2\") pod \"placement-db-create-pxnc7\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.053428 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqvd\" (UniqueName: \"kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.053573 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.094145 4801 generic.go:334] "Generic (PLEG): container finished" podID="ac13fbea-bf18-449a-aa48-65aefa77699d" containerID="a2a78734a3e7c4dc8d8b5f6acff32b2f985aace6fc6f2f9874b5111e74c4e4b8" exitCode=0
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.094997 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e647-account-create-jzbwv" event={"ID":"ac13fbea-bf18-449a-aa48-65aefa77699d","Type":"ContainerDied","Data":"a2a78734a3e7c4dc8d8b5f6acff32b2f985aace6fc6f2f9874b5111e74c4e4b8"}
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.106792 4801 generic.go:334] "Generic (PLEG): container finished" podID="49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" containerID="3f97b4efe5a8f1855d96953c7fe2e5641a7bdc0db247a241073e48797642bdd5" exitCode=0
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.106858 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4z6pq" event={"ID":"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4","Type":"ContainerDied","Data":"3f97b4efe5a8f1855d96953c7fe2e5641a7bdc0db247a241073e48797642bdd5"}
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.158605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqvd\" (UniqueName: \"kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.158689 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.159892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.178997 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqvd\" (UniqueName: \"kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd\") pod \"placement-3ad7-account-create-jw6sj\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.230690 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.240412 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxnc7"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.307176 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ad7-account-create-jw6sj"
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.375473 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2qsxb"]
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.582876 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7782-account-create-5lrd6"]
Nov 24 21:26:59 crc kubenswrapper[4801]: W1124 21:26:59.618377 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda09d1697_e51b_43fa_95d5_6194022a7206.slice/crio-ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70 WatchSource:0}: Error finding container ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70: Status 404 returned error can't find the container with id ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70
Nov 24 21:26:59 crc kubenswrapper[4801]: I1124 21:26:59.848617 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pxnc7"]
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.084406 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3ad7-account-create-jw6sj"]
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.199245 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7782-account-create-5lrd6" event={"ID":"a09d1697-e51b-43fa-95d5-6194022a7206","Type":"ContainerStarted","Data":"abb9ba962e41dcb3ebd31ee64b19302aa98a627cc182138fee6759bab49db55a"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.199320 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7782-account-create-5lrd6" event={"ID":"a09d1697-e51b-43fa-95d5-6194022a7206","Type":"ContainerStarted","Data":"ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.205196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ad7-account-create-jw6sj" event={"ID":"8187888b-0911-4cd9-be77-4c373789c09a","Type":"ContainerStarted","Data":"7baa817b4fac20b9bf186d0b90b75a0f44613be285618751abac4cffdde8b431"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.210647 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxnc7" event={"ID":"a807d6ed-6672-4b56-b392-f20eadaaf913","Type":"ContainerStarted","Data":"31fb54e9cc489cfce70c28ab5e620a32ee91abf0a7b8fb3aaeab264216a7bb2a"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.230197 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7782-account-create-5lrd6" podStartSLOduration=2.230174255 podStartE2EDuration="2.230174255s" podCreationTimestamp="2025-11-24 21:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:00.222609827 +0000 UTC m=+1192.305196497" watchObservedRunningTime="2025-11-24 21:27:00.230174255 +0000 UTC m=+1192.312760915"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.238477 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2qsxb" event={"ID":"0bed4fea-aa42-417a-a051-c42b55b21835","Type":"ContainerStarted","Data":"21810e02d126300bcd4cbf37bb31dcd054ee9ad4165ba74a5f59abdcaed6f782"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.238563 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2qsxb" event={"ID":"0bed4fea-aa42-417a-a051-c42b55b21835","Type":"ContainerStarted","Data":"5baf9eab00d0c88f5bfb64e61520d364cb62c28ab28a59a4e9c8c996f885bd5c"}
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.277930 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-2qsxb" podStartSLOduration=2.277909471 podStartE2EDuration="2.277909471s" podCreationTimestamp="2025-11-24 21:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:00.266150802 +0000 UTC m=+1192.348737472" watchObservedRunningTime="2025-11-24 21:27:00.277909471 +0000 UTC m=+1192.360496141"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.543579 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0"
Nov 24 21:27:00 crc kubenswrapper[4801]: E1124 21:27:00.543989 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 24 21:27:00 crc kubenswrapper[4801]: E1124 21:27:00.546131 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 24 21:27:00 crc kubenswrapper[4801]: E1124 21:27:00.546224 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. No retries permitted until 2025-11-24 21:27:08.546194168 +0000 UTC m=+1200.628780838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.737664 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cflfb"]
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.739487 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.752700 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cflfb"]
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.752767 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnjm\" (UniqueName: \"kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.752982 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.855667 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwnjm\" (UniqueName: \"kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.856341 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.859471 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:00 crc kubenswrapper[4801]: I1124 21:27:00.887675 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwnjm\" (UniqueName: \"kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm\") pod \"mysqld-exporter-openstack-db-create-cflfb\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.057948 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4b53-account-create-ggbmv"]
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.061607 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.068939 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.095010 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4b53-account-create-ggbmv"]
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.097353 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.108274 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cflfb"
Nov 24 21:27:01 crc kubenswrapper[4801]: E1124 21:27:01.111087 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8187888b_0911_4cd9_be77_4c373789c09a.slice/crio-039125d23328e6804789d952815adf77ac7263c9c48905f3e16350d1b320c44a.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.165061 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.165226 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6692\" (UniqueName: \"kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.257567 4801 generic.go:334] "Generic (PLEG): container finished" podID="a807d6ed-6672-4b56-b392-f20eadaaf913" containerID="511bb2dc02b4db9b12da879be614351ab65fa89176cee1f74148c9d7f3a4601e" exitCode=0
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.257694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxnc7" event={"ID":"a807d6ed-6672-4b56-b392-f20eadaaf913","Type":"ContainerDied","Data":"511bb2dc02b4db9b12da879be614351ab65fa89176cee1f74148c9d7f3a4601e"}
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.269831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6692\" (UniqueName: \"kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.269998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.270138 4801 generic.go:334] "Generic (PLEG): container finished" podID="0bed4fea-aa42-417a-a051-c42b55b21835" containerID="21810e02d126300bcd4cbf37bb31dcd054ee9ad4165ba74a5f59abdcaed6f782" exitCode=0
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.270916 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.270356 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2qsxb" event={"ID":"0bed4fea-aa42-417a-a051-c42b55b21835","Type":"ContainerDied","Data":"21810e02d126300bcd4cbf37bb31dcd054ee9ad4165ba74a5f59abdcaed6f782"}
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.273260 4801 generic.go:334] "Generic (PLEG): container finished" podID="a09d1697-e51b-43fa-95d5-6194022a7206" containerID="abb9ba962e41dcb3ebd31ee64b19302aa98a627cc182138fee6759bab49db55a" exitCode=0
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.273497 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7782-account-create-5lrd6" event={"ID":"a09d1697-e51b-43fa-95d5-6194022a7206","Type":"ContainerDied","Data":"abb9ba962e41dcb3ebd31ee64b19302aa98a627cc182138fee6759bab49db55a"}
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.284057 4801 generic.go:334] "Generic (PLEG): container finished" podID="8187888b-0911-4cd9-be77-4c373789c09a" containerID="039125d23328e6804789d952815adf77ac7263c9c48905f3e16350d1b320c44a" exitCode=0
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.284141 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ad7-account-create-jw6sj" event={"ID":"8187888b-0911-4cd9-be77-4c373789c09a","Type":"ContainerDied","Data":"039125d23328e6804789d952815adf77ac7263c9c48905f3e16350d1b320c44a"}
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.290312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6692\" (UniqueName: \"kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692\") pod \"mysqld-exporter-4b53-account-create-ggbmv\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.393012 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.702610 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-ntxvj"
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.776415 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"]
Nov 24 21:27:01 crc kubenswrapper[4801]: I1124 21:27:01.781792 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" containerID="cri-o://b244eddb4214977cf5f9c16891d15d88c115ab3a91a40b167a04e1d9bbaa6a29" gracePeriod=10
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.296850 4801 generic.go:334] "Generic (PLEG): container finished" podID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerID="b244eddb4214977cf5f9c16891d15d88c115ab3a91a40b167a04e1d9bbaa6a29" exitCode=0
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.296967 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" event={"ID":"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85","Type":"ContainerDied","Data":"b244eddb4214977cf5f9c16891d15d88c115ab3a91a40b167a04e1d9bbaa6a29"}
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.603418 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4z6pq"
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.619302 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e647-account-create-jzbwv"
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.729045 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts\") pod \"ac13fbea-bf18-449a-aa48-65aefa77699d\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") "
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.729142 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts\") pod \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") "
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.729415 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62w2k\" (UniqueName: \"kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k\") pod \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\" (UID: \"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4\") "
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.729510 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5wtf\" (UniqueName: \"kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf\") pod \"ac13fbea-bf18-449a-aa48-65aefa77699d\" (UID: \"ac13fbea-bf18-449a-aa48-65aefa77699d\") "
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.732763 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" (UID: "49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.733177 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac13fbea-bf18-449a-aa48-65aefa77699d" (UID: "ac13fbea-bf18-449a-aa48-65aefa77699d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.742335 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k" (OuterVolumeSpecName: "kube-api-access-62w2k") pod "49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" (UID: "49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4"). InnerVolumeSpecName "kube-api-access-62w2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.771883 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf" (OuterVolumeSpecName: "kube-api-access-w5wtf") pod "ac13fbea-bf18-449a-aa48-65aefa77699d" (UID: "ac13fbea-bf18-449a-aa48-65aefa77699d"). InnerVolumeSpecName "kube-api-access-w5wtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.838776 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62w2k\" (UniqueName: \"kubernetes.io/projected/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-kube-api-access-62w2k\") on node \"crc\" DevicePath \"\""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.838806 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5wtf\" (UniqueName: \"kubernetes.io/projected/ac13fbea-bf18-449a-aa48-65aefa77699d-kube-api-access-w5wtf\") on node \"crc\" DevicePath \"\""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.838818 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac13fbea-bf18-449a-aa48-65aefa77699d-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.839072 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:27:02 crc kubenswrapper[4801]: I1124 21:27:02.839525 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused"
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.348662 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e647-account-create-jzbwv" event={"ID":"ac13fbea-bf18-449a-aa48-65aefa77699d","Type":"ContainerDied","Data":"3d6a0e9b7d51c36e0449a256925ee79aece42eeb86dd7f932a2e22a326b46a41"}
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.348761 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6a0e9b7d51c36e0449a256925ee79aece42eeb86dd7f932a2e22a326b46a41"
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.348698 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e647-account-create-jzbwv"
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.352172 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4z6pq" event={"ID":"49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4","Type":"ContainerDied","Data":"3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013"}
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.352227 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eac2d02b04e6b497730989b794f128ae3ccdff251359bdb3ab71666c6d87013"
Nov 24 21:27:03 crc kubenswrapper[4801]: I1124 21:27:03.352197 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4z6pq"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.448466 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-m4tng"]
Nov 24 21:27:04 crc kubenswrapper[4801]: E1124 21:27:04.455040 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac13fbea-bf18-449a-aa48-65aefa77699d" containerName="mariadb-account-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.455083 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac13fbea-bf18-449a-aa48-65aefa77699d" containerName="mariadb-account-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: E1124 21:27:04.455145 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" containerName="mariadb-database-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.455157 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" containerName="mariadb-database-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.456676 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac13fbea-bf18-449a-aa48-65aefa77699d" containerName="mariadb-account-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.456730 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" containerName="mariadb-database-create"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.458467 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.462542 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-68kgz"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.462873 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.510194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-m4tng"]
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.609203 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.609595 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.609696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.609999 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.712774 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.714297 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.714456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.714628 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.719705 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.722166 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.732919 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.733796 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk\") pod \"glance-db-sync-m4tng\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:04 crc kubenswrapper[4801]: I1124 21:27:04.793441 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-m4tng"
Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.000063 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2qsxb"
Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.125803 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts\") pod \"0bed4fea-aa42-417a-a051-c42b55b21835\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") "
Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.125964 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjhdg\" (UniqueName: \"kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg\") pod \"0bed4fea-aa42-417a-a051-c42b55b21835\" (UID: \"0bed4fea-aa42-417a-a051-c42b55b21835\") "
Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.127217 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bed4fea-aa42-417a-a051-c42b55b21835" (UID: "0bed4fea-aa42-417a-a051-c42b55b21835"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.135767 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg" (OuterVolumeSpecName: "kube-api-access-bjhdg") pod "0bed4fea-aa42-417a-a051-c42b55b21835" (UID: "0bed4fea-aa42-417a-a051-c42b55b21835"). InnerVolumeSpecName "kube-api-access-bjhdg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.229801 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bed4fea-aa42-417a-a051-c42b55b21835-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.229862 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjhdg\" (UniqueName: \"kubernetes.io/projected/0bed4fea-aa42-417a-a051-c42b55b21835-kube-api-access-bjhdg\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.382238 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2qsxb" Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.382267 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2qsxb" event={"ID":"0bed4fea-aa42-417a-a051-c42b55b21835","Type":"ContainerDied","Data":"5baf9eab00d0c88f5bfb64e61520d364cb62c28ab28a59a4e9c8c996f885bd5c"} Nov 24 21:27:05 crc kubenswrapper[4801]: I1124 21:27:05.383147 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5baf9eab00d0c88f5bfb64e61520d364cb62c28ab28a59a4e9c8c996f885bd5c" Nov 24 21:27:08 crc kubenswrapper[4801]: I1124 21:27:08.636976 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:27:08 crc kubenswrapper[4801]: E1124 21:27:08.637316 4801 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:27:08 crc kubenswrapper[4801]: E1124 21:27:08.638203 4801 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:27:08 crc kubenswrapper[4801]: E1124 21:27:08.638349 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift podName:ba1dd9d3-072d-4cc1-b164-9701cb421564 nodeName:}" failed. No retries permitted until 2025-11-24 21:27:24.638311339 +0000 UTC m=+1216.720898029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift") pod "swift-storage-0" (UID: "ba1dd9d3-072d-4cc1-b164-9701cb421564") : configmap "swift-ring-files" not found Nov 24 21:27:08 crc kubenswrapper[4801]: I1124 21:27:08.766483 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qm4tj" podUID="89430698-4742-4f29-93c4-ecd964255e62" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:27:08 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:27:08 crc kubenswrapper[4801]: > Nov 24 21:27:08 crc kubenswrapper[4801]: I1124 21:27:08.867094 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.194352 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxnc7" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.199020 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.207323 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.228837 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ad7-account-create-jw6sj" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.264207 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts\") pod \"a09d1697-e51b-43fa-95d5-6194022a7206\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.264830 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hblkz\" (UniqueName: \"kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz\") pod \"a09d1697-e51b-43fa-95d5-6194022a7206\" (UID: \"a09d1697-e51b-43fa-95d5-6194022a7206\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.264871 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts\") pod \"a807d6ed-6672-4b56-b392-f20eadaaf913\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.265027 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6sf2\" (UniqueName: \"kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2\") pod \"a807d6ed-6672-4b56-b392-f20eadaaf913\" (UID: \"a807d6ed-6672-4b56-b392-f20eadaaf913\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.267702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"a09d1697-e51b-43fa-95d5-6194022a7206" (UID: "a09d1697-e51b-43fa-95d5-6194022a7206"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.270782 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a807d6ed-6672-4b56-b392-f20eadaaf913" (UID: "a807d6ed-6672-4b56-b392-f20eadaaf913"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.278290 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz" (OuterVolumeSpecName: "kube-api-access-hblkz") pod "a09d1697-e51b-43fa-95d5-6194022a7206" (UID: "a09d1697-e51b-43fa-95d5-6194022a7206"). InnerVolumeSpecName "kube-api-access-hblkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.278418 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2" (OuterVolumeSpecName: "kube-api-access-z6sf2") pod "a807d6ed-6672-4b56-b392-f20eadaaf913" (UID: "a807d6ed-6672-4b56-b392-f20eadaaf913"). InnerVolumeSpecName "kube-api-access-z6sf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts\") pod \"8187888b-0911-4cd9-be77-4c373789c09a\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368350 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368482 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368509 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368561 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqtcz\" (UniqueName: \"kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368651 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.368777 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvqvd\" (UniqueName: \"kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd\") pod \"8187888b-0911-4cd9-be77-4c373789c09a\" (UID: \"8187888b-0911-4cd9-be77-4c373789c09a\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.369383 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6sf2\" (UniqueName: \"kubernetes.io/projected/a807d6ed-6672-4b56-b392-f20eadaaf913-kube-api-access-z6sf2\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.369402 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a09d1697-e51b-43fa-95d5-6194022a7206-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.369412 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hblkz\" (UniqueName: \"kubernetes.io/projected/a09d1697-e51b-43fa-95d5-6194022a7206-kube-api-access-hblkz\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.369421 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a807d6ed-6672-4b56-b392-f20eadaaf913-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.370335 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8187888b-0911-4cd9-be77-4c373789c09a" (UID: "8187888b-0911-4cd9-be77-4c373789c09a"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.374267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz" (OuterVolumeSpecName: "kube-api-access-mqtcz") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "kube-api-access-mqtcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.374547 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd" (OuterVolumeSpecName: "kube-api-access-vvqvd") pod "8187888b-0911-4cd9-be77-4c373789c09a" (UID: "8187888b-0911-4cd9-be77-4c373789c09a"). InnerVolumeSpecName "kube-api-access-vvqvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.463303 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" event={"ID":"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85","Type":"ContainerDied","Data":"3e242d632d39917f704e1a4bebb677979981678eda3ecbcd8f3edc761571c50d"} Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.463557 4801 scope.go:117] "RemoveContainer" containerID="b244eddb4214977cf5f9c16891d15d88c115ab3a91a40b167a04e1d9bbaa6a29" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.464028 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.469327 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7782-account-create-5lrd6" event={"ID":"a09d1697-e51b-43fa-95d5-6194022a7206","Type":"ContainerDied","Data":"ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70"} Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.469386 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddbf87037c452d0510a40a63e679e9a42792c8716c0c6b4f9d62dc579a2aec70" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.469811 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.469951 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7782-account-create-5lrd6" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.470765 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") pod \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\" (UID: \"e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85\") " Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.471602 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqtcz\" (UniqueName: \"kubernetes.io/projected/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-kube-api-access-mqtcz\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.471622 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvqvd\" (UniqueName: \"kubernetes.io/projected/8187888b-0911-4cd9-be77-4c373789c09a-kube-api-access-vvqvd\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.471634 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8187888b-0911-4cd9-be77-4c373789c09a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: W1124 21:27:09.471704 4801 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85/volumes/kubernetes.io~configmap/dns-svc Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.471719 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.479222 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config" (OuterVolumeSpecName: "config") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.480739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ad7-account-create-jw6sj" event={"ID":"8187888b-0911-4cd9-be77-4c373789c09a","Type":"ContainerDied","Data":"7baa817b4fac20b9bf186d0b90b75a0f44613be285618751abac4cffdde8b431"} Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.480791 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7baa817b4fac20b9bf186d0b90b75a0f44613be285618751abac4cffdde8b431" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.480860 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ad7-account-create-jw6sj" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.488813 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxnc7" event={"ID":"a807d6ed-6672-4b56-b392-f20eadaaf913","Type":"ContainerDied","Data":"31fb54e9cc489cfce70c28ab5e620a32ee91abf0a7b8fb3aaeab264216a7bb2a"} Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.488872 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31fb54e9cc489cfce70c28ab5e620a32ee91abf0a7b8fb3aaeab264216a7bb2a" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.489014 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pxnc7" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.489146 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.500476 4801 scope.go:117] "RemoveContainer" containerID="f917db9e2d3873179c9db9c514b73f792b2af239e76c7532df3d37f604ffb182" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.532687 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" (UID: "e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.575705 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.576281 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.576719 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.576785 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4801]: W1124 21:27:09.583672 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7df3222_fe4f_4726_beac_4ba9e699368b.slice/crio-70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22 WatchSource:0}: Error finding container 70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22: Status 404 returned error can't find the container with id 70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22 Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.584798 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cflfb"] Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.593090 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4b53-account-create-ggbmv"] Nov 24 
21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.645691 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.830567 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"] Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.844225 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2m66p"] Nov 24 21:27:09 crc kubenswrapper[4801]: I1124 21:27:09.862849 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-m4tng"] Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.618252 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m4tng" event={"ID":"b7d70dfa-a9e5-417b-9506-95bd490da3ef","Type":"ContainerStarted","Data":"6885f9df5cc6fdf7340e5dd96c2db367f67e843724bb9f686439c5b478a969c4"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.639593 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbr7q" event={"ID":"4add1738-d33e-4fc5-aaaf-ae28dcd88220","Type":"ContainerStarted","Data":"3649f6f88aa815d817ebf731f09816ac43862497b5593870c562acf8f7787439"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.663465 4801 generic.go:334] "Generic (PLEG): container finished" podID="c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" containerID="d6124ae5b783ea07f780ddf37adc7bdf3fcc82ec43aa13d7b459e92276827f8f" exitCode=0 Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.693256 4801 generic.go:334] "Generic (PLEG): container finished" podID="d7df3222-fe4f-4726-beac-4ba9e699368b" containerID="f215d627d7e6db0cc079f215d69d87bbb55ff7021310e31bd20016ab8ba7afa3" exitCode=0 Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.693541 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" 
path="/var/lib/kubelet/pods/e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85/volumes" Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.695191 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv" event={"ID":"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d","Type":"ContainerDied","Data":"d6124ae5b783ea07f780ddf37adc7bdf3fcc82ec43aa13d7b459e92276827f8f"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.695310 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv" event={"ID":"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d","Type":"ContainerStarted","Data":"7a743d642881ecd0483c18791b47215a75c7e255422bf8bb83a0d5e497febaaa"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.695331 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cflfb" event={"ID":"d7df3222-fe4f-4726-beac-4ba9e699368b","Type":"ContainerDied","Data":"f215d627d7e6db0cc079f215d69d87bbb55ff7021310e31bd20016ab8ba7afa3"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.695345 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cflfb" event={"ID":"d7df3222-fe4f-4726-beac-4ba9e699368b","Type":"ContainerStarted","Data":"70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.701340 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerStarted","Data":"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27"} Nov 24 21:27:10 crc kubenswrapper[4801]: I1124 21:27:10.704568 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rbr7q" podStartSLOduration=3.068891876 podStartE2EDuration="14.704544526s" podCreationTimestamp="2025-11-24 21:26:56 +0000 UTC" 
firstStartedPulling="2025-11-24 21:26:57.423082713 +0000 UTC m=+1189.505669373" lastFinishedPulling="2025-11-24 21:27:09.058735353 +0000 UTC m=+1201.141322023" observedRunningTime="2025-11-24 21:27:10.700783409 +0000 UTC m=+1202.783370079" watchObservedRunningTime="2025-11-24 21:27:10.704544526 +0000 UTC m=+1202.787131196" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.527438 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.541357 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cflfb" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.585873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6692\" (UniqueName: \"kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692\") pod \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.586197 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts\") pod \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\" (UID: \"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d\") " Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.587764 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" (UID: "c44d60b6-4a0a-47dd-bc64-350bd34f5a4d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.596826 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692" (OuterVolumeSpecName: "kube-api-access-d6692") pod "c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" (UID: "c44d60b6-4a0a-47dd-bc64-350bd34f5a4d"). InnerVolumeSpecName "kube-api-access-d6692". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.688233 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwnjm\" (UniqueName: \"kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm\") pod \"d7df3222-fe4f-4726-beac-4ba9e699368b\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.688630 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts\") pod \"d7df3222-fe4f-4726-beac-4ba9e699368b\" (UID: \"d7df3222-fe4f-4726-beac-4ba9e699368b\") " Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.689404 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7df3222-fe4f-4726-beac-4ba9e699368b" (UID: "d7df3222-fe4f-4726-beac-4ba9e699368b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.691344 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6692\" (UniqueName: \"kubernetes.io/projected/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-kube-api-access-d6692\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.691424 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7df3222-fe4f-4726-beac-4ba9e699368b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.691440 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.692494 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm" (OuterVolumeSpecName: "kube-api-access-rwnjm") pod "d7df3222-fe4f-4726-beac-4ba9e699368b" (UID: "d7df3222-fe4f-4726-beac-4ba9e699368b"). InnerVolumeSpecName "kube-api-access-rwnjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.753328 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cflfb" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.753347 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cflfb" event={"ID":"d7df3222-fe4f-4726-beac-4ba9e699368b","Type":"ContainerDied","Data":"70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22"} Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.753419 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70056a4b4dddf1191fae26d761a00e9f5d7d62db85f154af6269548be88b6a22" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.761520 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv" event={"ID":"c44d60b6-4a0a-47dd-bc64-350bd34f5a4d","Type":"ContainerDied","Data":"7a743d642881ecd0483c18791b47215a75c7e255422bf8bb83a0d5e497febaaa"} Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.761560 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a743d642881ecd0483c18791b47215a75c7e255422bf8bb83a0d5e497febaaa" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.761610 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4b53-account-create-ggbmv" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.796538 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwnjm\" (UniqueName: \"kubernetes.io/projected/d7df3222-fe4f-4726-beac-4ba9e699368b-kube-api-access-rwnjm\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:12 crc kubenswrapper[4801]: I1124 21:27:12.837325 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-2m66p" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Nov 24 21:27:13 crc kubenswrapper[4801]: I1124 21:27:13.258485 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:27:13 crc kubenswrapper[4801]: I1124 21:27:13.790797 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qm4tj" podUID="89430698-4742-4f29-93c4-ecd964255e62" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:27:13 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:27:13 crc kubenswrapper[4801]: > Nov 24 21:27:13 crc kubenswrapper[4801]: I1124 21:27:13.792843 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerStarted","Data":"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8"} Nov 24 21:27:13 crc kubenswrapper[4801]: I1124 21:27:13.870193 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sbns4" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.106860 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qm4tj-config-8cx8t"] Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108548 
4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108567 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108586 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7df3222-fe4f-4726-beac-4ba9e699368b" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108595 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7df3222-fe4f-4726-beac-4ba9e699368b" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108611 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="init" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108618 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="init" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108643 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a807d6ed-6672-4b56-b392-f20eadaaf913" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108650 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a807d6ed-6672-4b56-b392-f20eadaaf913" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108666 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8187888b-0911-4cd9-be77-4c373789c09a" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108672 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8187888b-0911-4cd9-be77-4c373789c09a" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 
21:27:14.108686 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bed4fea-aa42-417a-a051-c42b55b21835" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108694 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bed4fea-aa42-417a-a051-c42b55b21835" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108709 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108717 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: E1124 21:27:14.108729 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09d1697-e51b-43fa-95d5-6194022a7206" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108736 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09d1697-e51b-43fa-95d5-6194022a7206" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108962 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108972 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7df3222-fe4f-4726-beac-4ba9e699368b" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108984 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09d1697-e51b-43fa-95d5-6194022a7206" containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.108997 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8187888b-0911-4cd9-be77-4c373789c09a" 
containerName="mariadb-account-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.109004 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bed4fea-aa42-417a-a051-c42b55b21835" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.109022 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ffb7f2-e15c-4d53-a87d-7b5d09a39b85" containerName="dnsmasq-dns" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.109032 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a807d6ed-6672-4b56-b392-f20eadaaf913" containerName="mariadb-database-create" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.110029 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.112984 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.135562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qm4tj-config-8cx8t"] Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.239490 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.239860 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69cd\" (UniqueName: \"kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " 
pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.240191 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.240239 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.240307 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.240401 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.346801 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts\") pod 
\"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.346867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.346914 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.346955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347082 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347213 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m69cd\" (UniqueName: \"kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd\") pod 
\"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347429 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347550 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.347893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.349590 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: 
\"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.368251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69cd\" (UniqueName: \"kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd\") pod \"ovn-controller-qm4tj-config-8cx8t\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.431166 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.816259 4801 generic.go:334] "Generic (PLEG): container finished" podID="052262eb-3362-4169-a9e2-96e364d20be8" containerID="dcfc6398313ef14cefed31e1a783d37f942b5f8c01dcd5371474a4e544086dd1" exitCode=0 Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.816770 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerDied","Data":"dcfc6398313ef14cefed31e1a783d37f942b5f8c01dcd5371474a4e544086dd1"} Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.829636 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerDied","Data":"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18"} Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.827775 4801 generic.go:334] "Generic (PLEG): container finished" podID="fb8472fa-9a35-4787-b38c-0c657881d910" containerID="d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18" exitCode=0 Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.850578 4801 generic.go:334] "Generic (PLEG): container finished" podID="af143054-b9a1-432a-a0f8-9f489550bd24" 
containerID="b56a953951e0148a3ed7af819ebbd4de7c7816d9238af1bd0bd53a4ca0afb848" exitCode=0 Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.850685 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerDied","Data":"b56a953951e0148a3ed7af819ebbd4de7c7816d9238af1bd0bd53a4ca0afb848"} Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.853148 4801 generic.go:334] "Generic (PLEG): container finished" podID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerID="a857a39ae5ed62c45ea72764d9ae7bed043f8de581aa188be2bce5c73eed473f" exitCode=0 Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.853197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerDied","Data":"a857a39ae5ed62c45ea72764d9ae7bed043f8de581aa188be2bce5c73eed473f"} Nov 24 21:27:14 crc kubenswrapper[4801]: I1124 21:27:14.974447 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qm4tj-config-8cx8t"] Nov 24 21:27:15 crc kubenswrapper[4801]: W1124 21:27:15.008350 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode05598fd_7d96_4483_b94f_4bdb9511ebb5.slice/crio-59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52 WatchSource:0}: Error finding container 59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52: Status 404 returned error can't find the container with id 59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52 Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.873969 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerStarted","Data":"fc1f3038cdaabe8ddb5a1dbea4535a51d079fafea1c55116464b23e1c2fa69e9"} Nov 24 21:27:15 crc 
kubenswrapper[4801]: I1124 21:27:15.874727 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.881855 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerStarted","Data":"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69"} Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.882350 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.885316 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj-config-8cx8t" event={"ID":"e05598fd-7d96-4483-b94f-4bdb9511ebb5","Type":"ContainerStarted","Data":"bad48caa1251cb6a2598b5993ff76b256c0d500f40c1c3be4de4489eb6b0bc70"} Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.885348 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj-config-8cx8t" event={"ID":"e05598fd-7d96-4483-b94f-4bdb9511ebb5","Type":"ContainerStarted","Data":"59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52"} Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.893156 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerStarted","Data":"7fe476b866813afe8c14c393885c5c7f6a9df092d39f3acccf375873e74f50d1"} Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.894426 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.907461 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerStarted","Data":"5dd98a3beb0b5724528590525fc724cb23a883a2bb2572cc1b74d820f698143d"} Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.908290 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.916600 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.079224029 podStartE2EDuration="1m21.916577109s" podCreationTimestamp="2025-11-24 21:25:54 +0000 UTC" firstStartedPulling="2025-11-24 21:25:56.156358872 +0000 UTC m=+1128.238945542" lastFinishedPulling="2025-11-24 21:26:39.993711952 +0000 UTC m=+1172.076298622" observedRunningTime="2025-11-24 21:27:15.906654617 +0000 UTC m=+1207.989241297" watchObservedRunningTime="2025-11-24 21:27:15.916577109 +0000 UTC m=+1207.999163779" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.937210 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.525919979 podStartE2EDuration="1m22.937185144s" podCreationTimestamp="2025-11-24 21:25:53 +0000 UTC" firstStartedPulling="2025-11-24 21:25:55.682895495 +0000 UTC m=+1127.765482165" lastFinishedPulling="2025-11-24 21:26:40.09416066 +0000 UTC m=+1172.176747330" observedRunningTime="2025-11-24 21:27:15.932425655 +0000 UTC m=+1208.015012345" watchObservedRunningTime="2025-11-24 21:27:15.937185144 +0000 UTC m=+1208.019771814" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.966177 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qm4tj-config-8cx8t" podStartSLOduration=1.96610088 podStartE2EDuration="1.96610088s" podCreationTimestamp="2025-11-24 21:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:27:15.954966922 +0000 UTC m=+1208.037553592" watchObservedRunningTime="2025-11-24 21:27:15.96610088 +0000 UTC m=+1208.048687550" Nov 24 21:27:15 crc kubenswrapper[4801]: I1124 21:27:15.991813 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.638796445 podStartE2EDuration="1m22.991786755s" podCreationTimestamp="2025-11-24 21:25:53 +0000 UTC" firstStartedPulling="2025-11-24 21:25:55.902691122 +0000 UTC m=+1127.985277792" lastFinishedPulling="2025-11-24 21:26:40.255681432 +0000 UTC m=+1172.338268102" observedRunningTime="2025-11-24 21:27:15.97631107 +0000 UTC m=+1208.058897750" watchObservedRunningTime="2025-11-24 21:27:15.991786755 +0000 UTC m=+1208.074373435" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.019477 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.057859088 podStartE2EDuration="1m23.019453112s" podCreationTimestamp="2025-11-24 21:25:53 +0000 UTC" firstStartedPulling="2025-11-24 21:25:56.028037821 +0000 UTC m=+1128.110624491" lastFinishedPulling="2025-11-24 21:26:39.989631845 +0000 UTC m=+1172.072218515" observedRunningTime="2025-11-24 21:27:16.004934787 +0000 UTC m=+1208.087521477" watchObservedRunningTime="2025-11-24 21:27:16.019453112 +0000 UTC m=+1208.102039772" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.293772 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5"] Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.295486 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.327612 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5"] Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.414436 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw27s\" (UniqueName: \"kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.414727 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.514383 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0fd0-account-create-jpd6q"] Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.515890 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.517609 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.517691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw27s\" (UniqueName: \"kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.518486 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.523070 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.531562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0fd0-account-create-jpd6q"] Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.577260 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw27s\" (UniqueName: 
\"kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s\") pod \"mysqld-exporter-openstack-cell1-db-create-xsdv5\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.620229 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njg8\" (UniqueName: \"kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.620422 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.632167 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.727829 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.728007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njg8\" (UniqueName: \"kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.729324 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.758110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njg8\" (UniqueName: \"kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8\") pod \"mysqld-exporter-0fd0-account-create-jpd6q\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.836030 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.926754 4801 generic.go:334] "Generic (PLEG): container finished" podID="e05598fd-7d96-4483-b94f-4bdb9511ebb5" containerID="bad48caa1251cb6a2598b5993ff76b256c0d500f40c1c3be4de4489eb6b0bc70" exitCode=0 Nov 24 21:27:16 crc kubenswrapper[4801]: I1124 21:27:16.927289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj-config-8cx8t" event={"ID":"e05598fd-7d96-4483-b94f-4bdb9511ebb5","Type":"ContainerDied","Data":"bad48caa1251cb6a2598b5993ff76b256c0d500f40c1c3be4de4489eb6b0bc70"} Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.467193 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5"] Nov 24 21:27:17 crc kubenswrapper[4801]: W1124 21:27:17.486904 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod339630dc_e6c1_4471_bb45_a0985b4097ba.slice/crio-2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc WatchSource:0}: Error finding container 2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc: Status 404 returned error can't find the container with id 2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.782466 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0fd0-account-create-jpd6q"] Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.968286 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" event={"ID":"339630dc-e6c1-4471-bb45-a0985b4097ba","Type":"ContainerStarted","Data":"014caa51b3ffe3a7daf063907eb8bc9137ccb49012ecf98543cb3c5bca92b96c"} Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.968748 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" event={"ID":"339630dc-e6c1-4471-bb45-a0985b4097ba","Type":"ContainerStarted","Data":"2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc"} Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.973780 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" event={"ID":"535abb13-1113-478c-af82-661e1a06f21e","Type":"ContainerStarted","Data":"3564d6e98e8447aca46993f660058c73acc5ac7967ccc5bb599a092e3fc7ef6e"} Nov 24 21:27:17 crc kubenswrapper[4801]: I1124 21:27:17.995628 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" podStartSLOduration=1.995604266 podStartE2EDuration="1.995604266s" podCreationTimestamp="2025-11-24 21:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:17.990417934 +0000 UTC m=+1210.073004614" watchObservedRunningTime="2025-11-24 21:27:17.995604266 +0000 UTC m=+1210.078190926" Nov 24 21:27:18 crc kubenswrapper[4801]: I1124 21:27:18.794827 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qm4tj" Nov 24 21:27:18 crc kubenswrapper[4801]: I1124 21:27:18.995112 4801 generic.go:334] "Generic (PLEG): container finished" podID="339630dc-e6c1-4471-bb45-a0985b4097ba" containerID="014caa51b3ffe3a7daf063907eb8bc9137ccb49012ecf98543cb3c5bca92b96c" exitCode=0 Nov 24 21:27:18 crc kubenswrapper[4801]: I1124 21:27:18.995189 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" event={"ID":"339630dc-e6c1-4471-bb45-a0985b4097ba","Type":"ContainerDied","Data":"014caa51b3ffe3a7daf063907eb8bc9137ccb49012ecf98543cb3c5bca92b96c"} Nov 24 21:27:19 crc kubenswrapper[4801]: I1124 21:27:19.007229 4801 generic.go:334] "Generic 
(PLEG): container finished" podID="535abb13-1113-478c-af82-661e1a06f21e" containerID="df99caeb91e68d689662be32507075ef2e159e79137b32be319b65e8c37034a6" exitCode=0 Nov 24 21:27:19 crc kubenswrapper[4801]: I1124 21:27:19.007292 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" event={"ID":"535abb13-1113-478c-af82-661e1a06f21e","Type":"ContainerDied","Data":"df99caeb91e68d689662be32507075ef2e159e79137b32be319b65e8c37034a6"} Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.019880 4801 generic.go:334] "Generic (PLEG): container finished" podID="4add1738-d33e-4fc5-aaaf-ae28dcd88220" containerID="3649f6f88aa815d817ebf731f09816ac43862497b5593870c562acf8f7787439" exitCode=0 Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.020220 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbr7q" event={"ID":"4add1738-d33e-4fc5-aaaf-ae28dcd88220","Type":"ContainerDied","Data":"3649f6f88aa815d817ebf731f09816ac43862497b5593870c562acf8f7787439"} Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.139889 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225163 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225320 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225562 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225598 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225632 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225673 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m69cd\" (UniqueName: \"kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.226732 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.226987 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts" (OuterVolumeSpecName: "scripts") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.225766 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.227349 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run\") pod \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\" (UID: \"e05598fd-7d96-4483-b94f-4bdb9511ebb5\") " Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.227399 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run" (OuterVolumeSpecName: "var-run") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.228093 4801 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.228113 4801 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.228125 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e05598fd-7d96-4483-b94f-4bdb9511ebb5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.228136 4801 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.228146 4801 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e05598fd-7d96-4483-b94f-4bdb9511ebb5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.255866 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd" (OuterVolumeSpecName: "kube-api-access-m69cd") pod "e05598fd-7d96-4483-b94f-4bdb9511ebb5" (UID: "e05598fd-7d96-4483-b94f-4bdb9511ebb5"). InnerVolumeSpecName "kube-api-access-m69cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:20 crc kubenswrapper[4801]: I1124 21:27:20.330486 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m69cd\" (UniqueName: \"kubernetes.io/projected/e05598fd-7d96-4483-b94f-4bdb9511ebb5-kube-api-access-m69cd\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:21 crc kubenswrapper[4801]: I1124 21:27:21.042077 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qm4tj-config-8cx8t" Nov 24 21:27:21 crc kubenswrapper[4801]: I1124 21:27:21.042221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qm4tj-config-8cx8t" event={"ID":"e05598fd-7d96-4483-b94f-4bdb9511ebb5","Type":"ContainerDied","Data":"59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52"} Nov 24 21:27:21 crc kubenswrapper[4801]: I1124 21:27:21.044083 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e75dedf5f2097bdc87c0e0c91a1f9ee8e229b46af14a329233aec9a3908f52" Nov 24 21:27:21 crc kubenswrapper[4801]: I1124 21:27:21.350687 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qm4tj-config-8cx8t"] Nov 24 21:27:21 crc kubenswrapper[4801]: I1124 21:27:21.371458 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qm4tj-config-8cx8t"] Nov 24 21:27:22 crc kubenswrapper[4801]: I1124 21:27:22.678765 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05598fd-7d96-4483-b94f-4bdb9511ebb5" path="/var/lib/kubelet/pods/e05598fd-7d96-4483-b94f-4bdb9511ebb5/volumes" Nov 24 21:27:24 crc kubenswrapper[4801]: I1124 21:27:24.671418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " 
pod="openstack/swift-storage-0" Nov 24 21:27:24 crc kubenswrapper[4801]: I1124 21:27:24.681924 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba1dd9d3-072d-4cc1-b164-9701cb421564-etc-swift\") pod \"swift-storage-0\" (UID: \"ba1dd9d3-072d-4cc1-b164-9701cb421564\") " pod="openstack/swift-storage-0" Nov 24 21:27:24 crc kubenswrapper[4801]: I1124 21:27:24.865000 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:27:25 crc kubenswrapper[4801]: I1124 21:27:25.048697 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Nov 24 21:27:25 crc kubenswrapper[4801]: I1124 21:27:25.063518 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Nov 24 21:27:25 crc kubenswrapper[4801]: I1124 21:27:25.079674 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Nov 24 21:27:25 crc kubenswrapper[4801]: I1124 21:27:25.449692 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Nov 24 21:27:29 crc kubenswrapper[4801]: E1124 21:27:29.956991 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 24 21:27:29 crc kubenswrapper[4801]: E1124 21:27:29.958081 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil
,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-m4tng_openstack(b7d70dfa-a9e5-417b-9506-95bd490da3ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:27:29 crc kubenswrapper[4801]: E1124 21:27:29.959262 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-m4tng" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.170574 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.183957 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.188030 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" event={"ID":"535abb13-1113-478c-af82-661e1a06f21e","Type":"ContainerDied","Data":"3564d6e98e8447aca46993f660058c73acc5ac7967ccc5bb599a092e3fc7ef6e"} Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.188087 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3564d6e98e8447aca46993f660058c73acc5ac7967ccc5bb599a092e3fc7ef6e" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.188177 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0fd0-account-create-jpd6q" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.194715 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rbr7q" event={"ID":"4add1738-d33e-4fc5-aaaf-ae28dcd88220","Type":"ContainerDied","Data":"a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8"} Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.194806 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rbr7q" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.195463 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c9f06f43db12ec95f6119c4044cf6ff694c0515dc8f54466caa6add05003d8" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.198149 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" event={"ID":"339630dc-e6c1-4471-bb45-a0985b4097ba","Type":"ContainerDied","Data":"2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc"} Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.198221 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2202b3b9f6748541752edb768aeb15839f216ecb52cf3a052834788be841fecc" Nov 24 21:27:30 crc kubenswrapper[4801]: E1124 21:27:30.206408 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-m4tng" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.212078 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.225542 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226052 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226212 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2nk\" (UniqueName: \"kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226396 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226588 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226741 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.226913 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices\") pod \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\" (UID: \"4add1738-d33e-4fc5-aaaf-ae28dcd88220\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.228700 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.228962 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.249711 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk" (OuterVolumeSpecName: "kube-api-access-vt2nk") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "kube-api-access-vt2nk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.274359 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.300494 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.302614 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.308601 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts" (OuterVolumeSpecName: "scripts") pod "4add1738-d33e-4fc5-aaaf-ae28dcd88220" (UID: "4add1738-d33e-4fc5-aaaf-ae28dcd88220"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.329662 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw27s\" (UniqueName: \"kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s\") pod \"339630dc-e6c1-4471-bb45-a0985b4097ba\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.329757 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts\") pod \"535abb13-1113-478c-af82-661e1a06f21e\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.329926 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts\") pod \"339630dc-e6c1-4471-bb45-a0985b4097ba\" (UID: \"339630dc-e6c1-4471-bb45-a0985b4097ba\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.330008 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njg8\" (UniqueName: \"kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8\") pod \"535abb13-1113-478c-af82-661e1a06f21e\" (UID: \"535abb13-1113-478c-af82-661e1a06f21e\") " Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.330457 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "535abb13-1113-478c-af82-661e1a06f21e" (UID: "535abb13-1113-478c-af82-661e1a06f21e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.330859 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "339630dc-e6c1-4471-bb45-a0985b4097ba" (UID: "339630dc-e6c1-4471-bb45-a0985b4097ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335242 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s" (OuterVolumeSpecName: "kube-api-access-hw27s") pod "339630dc-e6c1-4471-bb45-a0985b4097ba" (UID: "339630dc-e6c1-4471-bb45-a0985b4097ba"). InnerVolumeSpecName "kube-api-access-hw27s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335695 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535abb13-1113-478c-af82-661e1a06f21e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335720 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335733 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335746 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2nk\" (UniqueName: 
\"kubernetes.io/projected/4add1738-d33e-4fc5-aaaf-ae28dcd88220-kube-api-access-vt2nk\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335761 4801 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335770 4801 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4add1738-d33e-4fc5-aaaf-ae28dcd88220-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335781 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339630dc-e6c1-4471-bb45-a0985b4097ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335790 4801 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4add1738-d33e-4fc5-aaaf-ae28dcd88220-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335801 4801 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4add1738-d33e-4fc5-aaaf-ae28dcd88220-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.335810 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw27s\" (UniqueName: \"kubernetes.io/projected/339630dc-e6c1-4471-bb45-a0985b4097ba-kube-api-access-hw27s\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.337278 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8" (OuterVolumeSpecName: "kube-api-access-7njg8") 
pod "535abb13-1113-478c-af82-661e1a06f21e" (UID: "535abb13-1113-478c-af82-661e1a06f21e"). InnerVolumeSpecName "kube-api-access-7njg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.439005 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njg8\" (UniqueName: \"kubernetes.io/projected/535abb13-1113-478c-af82-661e1a06f21e-kube-api-access-7njg8\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:30 crc kubenswrapper[4801]: W1124 21:27:30.655501 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba1dd9d3_072d_4cc1_b164_9701cb421564.slice/crio-579245c3bfe1a9a3ba7d68d4330287956066cb5c5a3ba32116539e28f5e71ff9 WatchSource:0}: Error finding container 579245c3bfe1a9a3ba7d68d4330287956066cb5c5a3ba32116539e28f5e71ff9: Status 404 returned error can't find the container with id 579245c3bfe1a9a3ba7d68d4330287956066cb5c5a3ba32116539e28f5e71ff9 Nov 24 21:27:30 crc kubenswrapper[4801]: I1124 21:27:30.683881 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.218219 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"579245c3bfe1a9a3ba7d68d4330287956066cb5c5a3ba32116539e28f5e71ff9"} Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.230991 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.231110 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerStarted","Data":"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a"} Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.284240 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.058386052 podStartE2EDuration="1m30.284215983s" podCreationTimestamp="2025-11-24 21:26:01 +0000 UTC" firstStartedPulling="2025-11-24 21:26:03.778667321 +0000 UTC m=+1135.861253991" lastFinishedPulling="2025-11-24 21:27:30.004497262 +0000 UTC m=+1222.087083922" observedRunningTime="2025-11-24 21:27:31.265095164 +0000 UTC m=+1223.347681854" watchObservedRunningTime="2025-11-24 21:27:31.284215983 +0000 UTC m=+1223.366802653" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.709245 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:27:31 crc kubenswrapper[4801]: E1124 21:27:31.709769 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4add1738-d33e-4fc5-aaaf-ae28dcd88220" containerName="swift-ring-rebalance" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.709787 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4add1738-d33e-4fc5-aaaf-ae28dcd88220" containerName="swift-ring-rebalance" Nov 24 21:27:31 crc kubenswrapper[4801]: E1124 21:27:31.709810 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339630dc-e6c1-4471-bb45-a0985b4097ba" containerName="mariadb-database-create" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.709816 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="339630dc-e6c1-4471-bb45-a0985b4097ba" 
containerName="mariadb-database-create" Nov 24 21:27:31 crc kubenswrapper[4801]: E1124 21:27:31.709828 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535abb13-1113-478c-af82-661e1a06f21e" containerName="mariadb-account-create" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.709836 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="535abb13-1113-478c-af82-661e1a06f21e" containerName="mariadb-account-create" Nov 24 21:27:31 crc kubenswrapper[4801]: E1124 21:27:31.710359 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05598fd-7d96-4483-b94f-4bdb9511ebb5" containerName="ovn-config" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.710382 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05598fd-7d96-4483-b94f-4bdb9511ebb5" containerName="ovn-config" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.710600 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05598fd-7d96-4483-b94f-4bdb9511ebb5" containerName="ovn-config" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.710615 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4add1738-d33e-4fc5-aaaf-ae28dcd88220" containerName="swift-ring-rebalance" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.710631 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="535abb13-1113-478c-af82-661e1a06f21e" containerName="mariadb-account-create" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.710654 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="339630dc-e6c1-4471-bb45-a0985b4097ba" containerName="mariadb-database-create" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.711467 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.719805 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.740843 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.782263 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9q2c\" (UniqueName: \"kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.782968 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.783068 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.887422 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.887546 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.887591 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9q2c\" (UniqueName: \"kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.900826 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.908062 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:31 crc kubenswrapper[4801]: I1124 21:27:31.916526 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9q2c\" (UniqueName: \"kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c\") pod \"mysqld-exporter-0\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " pod="openstack/mysqld-exporter-0" Nov 24 21:27:32 crc kubenswrapper[4801]: I1124 21:27:32.034823 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:27:32 crc kubenswrapper[4801]: I1124 21:27:32.449890 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:32 crc kubenswrapper[4801]: I1124 21:27:32.450307 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:32 crc kubenswrapper[4801]: I1124 21:27:32.492245 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:32 crc kubenswrapper[4801]: I1124 21:27:32.679729 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:27:33 crc kubenswrapper[4801]: I1124 21:27:33.265764 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"44202548-736a-47cc-93f0-622fca103c29","Type":"ContainerStarted","Data":"bfb5b211d7f7982d45f1166b8b31a37432d7480f1f2ef38c88c12206a3bb22f9"} Nov 24 21:27:33 crc kubenswrapper[4801]: I1124 21:27:33.269284 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"d9df921d04d80dce478692dfa92ba8c937bb5c87f5cc06598268af262f78f869"} Nov 24 21:27:33 crc kubenswrapper[4801]: I1124 21:27:33.269320 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"6d896d3831daf31d98ae7faae7178960afc9122421fb0f23070d23d773b66a17"} Nov 24 21:27:33 crc kubenswrapper[4801]: I1124 21:27:33.269334 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"b1be3e2c8754fdc75f8c27c9aa0cc2e0aec01041de9ac3d343591910a915d75c"} Nov 24 21:27:33 crc 
kubenswrapper[4801]: I1124 21:27:33.274398 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:34 crc kubenswrapper[4801]: I1124 21:27:34.287621 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"fbceb4ab1cc8501c7299e51d9b898759f8cde7d4fd800139aa5e24909cfd9f4a"} Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.045101 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.063734 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.078565 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.299456 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"44202548-736a-47cc-93f0-622fca103c29","Type":"ContainerStarted","Data":"b80e967374077c3a72433b7e49bbdb652565636392607e2604f0aa2f0b071d47"} Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.339414 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.658984758 podStartE2EDuration="4.339390965s" podCreationTimestamp="2025-11-24 21:27:31 +0000 UTC" firstStartedPulling="2025-11-24 21:27:32.690796319 +0000 UTC m=+1224.773382989" lastFinishedPulling="2025-11-24 21:27:34.371202526 +0000 UTC 
m=+1226.453789196" observedRunningTime="2025-11-24 21:27:35.332957741 +0000 UTC m=+1227.415544411" watchObservedRunningTime="2025-11-24 21:27:35.339390965 +0000 UTC m=+1227.421977635" Nov 24 21:27:35 crc kubenswrapper[4801]: I1124 21:27:35.462633 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:27:36 crc kubenswrapper[4801]: I1124 21:27:36.316187 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"74eea2a1060fe7a745ba565f707d2226f1612941ac476283ff2072b9dda184ac"} Nov 24 21:27:36 crc kubenswrapper[4801]: I1124 21:27:36.316647 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"f3773b3387b87470ee0723fbf4116ad638bcdc0ec340ac17da5dbc2633b11e3b"} Nov 24 21:27:36 crc kubenswrapper[4801]: I1124 21:27:36.316672 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"30356a467ff0833b688b47f7016c27765ac2bd824b979a7641e99c7991e11bbb"} Nov 24 21:27:36 crc kubenswrapper[4801]: I1124 21:27:36.316684 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"034cdddbef261464ddf1bf6f5422584f3d5edfd874d8eda08381f8cfd353a2c6"} Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.173260 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.173745 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="thanos-sidecar" 
containerID="cri-o://e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a" gracePeriod=600 Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.173758 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="config-reloader" containerID="cri-o://a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8" gracePeriod=600 Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.173587 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="prometheus" containerID="cri-o://58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27" gracePeriod=600 Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.330322 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerID="e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a" exitCode=0 Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.332002 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerID="58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27" exitCode=0 Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.332042 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerDied","Data":"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a"} Nov 24 21:27:37 crc kubenswrapper[4801]: I1124 21:27:37.332201 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerDied","Data":"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27"} Nov 24 21:27:37 crc kubenswrapper[4801]: 
I1124 21:27:37.450233 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.142:9090/-/ready\": dial tcp 10.217.0.142:9090: connect: connection refused" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.176543 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.298587 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299071 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299303 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299454 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299503 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7vw\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299589 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.299663 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file\") pod \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\" (UID: \"fbadfbdd-010e-4ce4-bc42-8871dc88b990\") " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.312808 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.312829 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.317059 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.320498 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out" (OuterVolumeSpecName: "config-out") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.321817 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw" (OuterVolumeSpecName: "kube-api-access-qk7vw") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "kube-api-access-qk7vw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.324907 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config" (OuterVolumeSpecName: "config") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.346421 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config" (OuterVolumeSpecName: "web-config") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.350932 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fbadfbdd-010e-4ce4-bc42-8871dc88b990" (UID: "fbadfbdd-010e-4ce4-bc42-8871dc88b990"). InnerVolumeSpecName "pvc-f72c2502-2c46-4819-8eae-028b996ef754". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.376498 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerID="a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8" exitCode=0 Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.376581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerDied","Data":"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.376619 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fbadfbdd-010e-4ce4-bc42-8871dc88b990","Type":"ContainerDied","Data":"6e004288a626364ed7e5756ed186872b82cbf05a89202506f1dc681149b9e8ef"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.376643 4801 scope.go:117] "RemoveContainer" containerID="e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.376828 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.402758 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"8de487b32121f38985c751e38808eba7bfd93b550d91497bc723a825d87f8de1"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.402818 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"29e691239ef4ad595629f8b6ec35d7e873d6b44141407d93f6b62db3d788515d"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.402831 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"807c7b754b1406ac9ef72d504f2fe0f0678e9c1f7be03b66cd5246228e145959"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.402842 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"d7643d57bf26d710d01d32955a746146e418d96ae9b929501588bef72ee21dc5"} Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404022 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404064 4801 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fbadfbdd-010e-4ce4-bc42-8871dc88b990-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404079 4801 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/fbadfbdd-010e-4ce4-bc42-8871dc88b990-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404097 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7vw\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-kube-api-access-qk7vw\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404109 4801 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fbadfbdd-010e-4ce4-bc42-8871dc88b990-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404118 4801 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404128 4801 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fbadfbdd-010e-4ce4-bc42-8871dc88b990-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.404168 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") on node \"crc\" " Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.437908 4801 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.438133 4801 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f72c2502-2c46-4819-8eae-028b996ef754" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754") on node "crc" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.505964 4801 reconciler_common.go:293] "Volume detached for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.525338 4801 scope.go:117] "RemoveContainer" containerID="a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.550926 4801 scope.go:117] "RemoveContainer" containerID="58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.559792 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.569529 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.589415 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.589886 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="thanos-sidecar" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.589904 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="thanos-sidecar" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.589929 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="config-reloader" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.589937 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="config-reloader" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.589956 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="init-config-reloader" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.589964 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="init-config-reloader" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.589982 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="prometheus" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.589988 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="prometheus" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.590192 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="config-reloader" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.590211 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="prometheus" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.590224 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" containerName="thanos-sidecar" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.592156 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.593096 4801 scope.go:117] "RemoveContainer" containerID="00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.595770 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-677dx" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.596064 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.596217 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.596406 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.597951 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.609817 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.618064 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.627818 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.658393 4801 scope.go:117] "RemoveContainer" containerID="e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.672206 4801 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a\": container with ID starting with e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a not found: ID does not exist" containerID="e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.672261 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a"} err="failed to get container status \"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a\": rpc error: code = NotFound desc = could not find container \"e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a\": container with ID starting with e4f0bf615ccedf85274555ee94d1ef7521e80b8e078498ebfd3dbb39615b7e9a not found: ID does not exist" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.672294 4801 scope.go:117] "RemoveContainer" containerID="a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.698381 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8\": container with ID starting with a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8 not found: ID does not exist" containerID="a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.698441 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8"} err="failed to get container status \"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8\": rpc error: code = NotFound desc = could not find container 
\"a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8\": container with ID starting with a437e808989d49819edd95b367179297c8c7eb8cf6ff2cc31d927dcc193f9de8 not found: ID does not exist" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.698530 4801 scope.go:117] "RemoveContainer" containerID="58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.704608 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27\": container with ID starting with 58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27 not found: ID does not exist" containerID="58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.704681 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27"} err="failed to get container status \"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27\": rpc error: code = NotFound desc = could not find container \"58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27\": container with ID starting with 58dd1d57e4a83374568564e72184a2b4f33cb1fbccc50292f68a30e213ca3f27 not found: ID does not exist" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.704716 4801 scope.go:117] "RemoveContainer" containerID="00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266" Nov 24 21:27:38 crc kubenswrapper[4801]: E1124 21:27:38.708940 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266\": container with ID starting with 00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266 not found: ID does not exist" 
containerID="00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.711371 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266"} err="failed to get container status \"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266\": rpc error: code = NotFound desc = could not find container \"00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266\": container with ID starting with 00416f58baf4e8bf9cba62075241861c702c7ab15073a6c9a6d4f0f64d6e1266 not found: ID does not exist" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712228 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712299 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 
21:27:38.712325 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712463 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712495 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712578 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712678 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712730 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.712768 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvwz\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-kube-api-access-jrvwz\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.739546 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbadfbdd-010e-4ce4-bc42-8871dc88b990" path="/var/lib/kubelet/pods/fbadfbdd-010e-4ce4-bc42-8871dc88b990/volumes" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819463 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819537 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819621 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819676 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819738 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819784 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819806 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819844 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" 
Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.819868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrvwz\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-kube-api-access-jrvwz\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.824139 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.830611 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.830667 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac9cfb283f52eb6a18303fe393c9cece8e9a2d177b89c99352e5073eb15e7e54/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.849573 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.849643 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.863382 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.863517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.865810 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.866631 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.867845 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.869915 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrvwz\" (UniqueName: \"kubernetes.io/projected/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-kube-api-access-jrvwz\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.870148 4801 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4b29e97-8d63-4b0d-8df1-9e832ecfaf17-config\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:38 crc kubenswrapper[4801]: I1124 21:27:38.915968 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f72c2502-2c46-4819-8eae-028b996ef754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f72c2502-2c46-4819-8eae-028b996ef754\") pod \"prometheus-metric-storage-0\" (UID: \"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:39 crc kubenswrapper[4801]: I1124 21:27:39.211257 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:27:39 crc kubenswrapper[4801]: I1124 21:27:39.449807 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"6067b6b57751c6675b63d416a87d26d84f3f9b9f0cd35fff97cb6aa7a6a70cd6"} Nov 24 21:27:39 crc kubenswrapper[4801]: I1124 21:27:39.449913 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"0a3472b15edd77d181c0da2808b455592b5f5b674efd53f949c3cd3a2228e4f6"} Nov 24 21:27:39 crc kubenswrapper[4801]: I1124 21:27:39.744895 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:27:39 crc kubenswrapper[4801]: W1124 21:27:39.753658 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b29e97_8d63_4b0d_8df1_9e832ecfaf17.slice/crio-af6ca746a46bfaeca3a742bdf540bc26234313f3609198ace1a5ec54b7b5749b WatchSource:0}: Error finding container 
af6ca746a46bfaeca3a742bdf540bc26234313f3609198ace1a5ec54b7b5749b: Status 404 returned error can't find the container with id af6ca746a46bfaeca3a742bdf540bc26234313f3609198ace1a5ec54b7b5749b Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.474402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ba1dd9d3-072d-4cc1-b164-9701cb421564","Type":"ContainerStarted","Data":"edab6aba9c1c6dc3203fdcf15e8dbe325bac5a90d858b31da315388628385ffa"} Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.476135 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerStarted","Data":"af6ca746a46bfaeca3a742bdf540bc26234313f3609198ace1a5ec54b7b5749b"} Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.534782 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.982307997 podStartE2EDuration="49.534761655s" podCreationTimestamp="2025-11-24 21:26:51 +0000 UTC" firstStartedPulling="2025-11-24 21:27:30.65993315 +0000 UTC m=+1222.742519820" lastFinishedPulling="2025-11-24 21:27:37.212386808 +0000 UTC m=+1229.294973478" observedRunningTime="2025-11-24 21:27:40.529210538 +0000 UTC m=+1232.611797218" watchObservedRunningTime="2025-11-24 21:27:40.534761655 +0000 UTC m=+1232.617348325" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.813991 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.816601 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.819131 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.839200 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879429 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879534 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879585 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879630 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gz5c\" (UniqueName: \"kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " 
pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879683 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.879753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " 
pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gz5c\" (UniqueName: \"kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982791 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.982883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.984416 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.984428 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc 
kubenswrapper[4801]: I1124 21:27:40.985135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.985614 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:40 crc kubenswrapper[4801]: I1124 21:27:40.985777 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:41 crc kubenswrapper[4801]: I1124 21:27:41.012437 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gz5c\" (UniqueName: \"kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c\") pod \"dnsmasq-dns-764c5664d7-k8pj7\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:41 crc kubenswrapper[4801]: I1124 21:27:41.141894 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:41 crc kubenswrapper[4801]: I1124 21:27:41.863979 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:27:41 crc kubenswrapper[4801]: W1124 21:27:41.868741 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47dfc1ea_2614_472f_8374_4d4955a197b1.slice/crio-b6e6b33da0eb9ff695566af44d761306a7a5c05f63f0f4c672a2f8febf18b111 WatchSource:0}: Error finding container b6e6b33da0eb9ff695566af44d761306a7a5c05f63f0f4c672a2f8febf18b111: Status 404 returned error can't find the container with id b6e6b33da0eb9ff695566af44d761306a7a5c05f63f0f4c672a2f8febf18b111 Nov 24 21:27:42 crc kubenswrapper[4801]: I1124 21:27:42.505128 4801 generic.go:334] "Generic (PLEG): container finished" podID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerID="87808e295194b592698b8d2de3982bd9d3e6a85778654da727f6264302e64bd5" exitCode=0 Nov 24 21:27:42 crc kubenswrapper[4801]: I1124 21:27:42.505812 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" event={"ID":"47dfc1ea-2614-472f-8374-4d4955a197b1","Type":"ContainerDied","Data":"87808e295194b592698b8d2de3982bd9d3e6a85778654da727f6264302e64bd5"} Nov 24 21:27:42 crc kubenswrapper[4801]: I1124 21:27:42.505851 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" event={"ID":"47dfc1ea-2614-472f-8374-4d4955a197b1","Type":"ContainerStarted","Data":"b6e6b33da0eb9ff695566af44d761306a7a5c05f63f0f4c672a2f8febf18b111"} Nov 24 21:27:43 crc kubenswrapper[4801]: I1124 21:27:43.522787 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" event={"ID":"47dfc1ea-2614-472f-8374-4d4955a197b1","Type":"ContainerStarted","Data":"24940f0b159ee1aac7122007df958bba9eea49888ac111f7b6e248f6964eac87"} Nov 24 21:27:43 crc 
kubenswrapper[4801]: I1124 21:27:43.523579 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:43 crc kubenswrapper[4801]: I1124 21:27:43.579415 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podStartSLOduration=3.579385978 podStartE2EDuration="3.579385978s" podCreationTimestamp="2025-11-24 21:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:43.568400206 +0000 UTC m=+1235.650986896" watchObservedRunningTime="2025-11-24 21:27:43.579385978 +0000 UTC m=+1235.661972658" Nov 24 21:27:44 crc kubenswrapper[4801]: I1124 21:27:44.538496 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerStarted","Data":"9b306097f579ed9be8ca9a31f0eac180f6dfd3d8d57eb7782f100148f28cd190"} Nov 24 21:27:45 crc kubenswrapper[4801]: I1124 21:27:45.049791 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Nov 24 21:27:45 crc kubenswrapper[4801]: I1124 21:27:45.079741 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Nov 24 21:27:47 crc kubenswrapper[4801]: I1124 21:27:47.606641 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m4tng" event={"ID":"b7d70dfa-a9e5-417b-9506-95bd490da3ef","Type":"ContainerStarted","Data":"a1ebcbb169350fb7019eb7660e9579fa8b4693f51417052daeb4ca7b23b233f6"} Nov 24 21:27:47 crc kubenswrapper[4801]: I1124 21:27:47.638883 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-m4tng" podStartSLOduration=7.048898521 podStartE2EDuration="43.638859685s" podCreationTimestamp="2025-11-24 21:27:04 +0000 UTC" 
firstStartedPulling="2025-11-24 21:27:09.877133789 +0000 UTC m=+1201.959720459" lastFinishedPulling="2025-11-24 21:27:46.467094953 +0000 UTC m=+1238.549681623" observedRunningTime="2025-11-24 21:27:47.635207345 +0000 UTC m=+1239.717794015" watchObservedRunningTime="2025-11-24 21:27:47.638859685 +0000 UTC m=+1239.721446365" Nov 24 21:27:47 crc kubenswrapper[4801]: I1124 21:27:47.954058 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-b44fx"] Nov 24 21:27:47 crc kubenswrapper[4801]: I1124 21:27:47.957410 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b44fx" Nov 24 21:27:47 crc kubenswrapper[4801]: I1124 21:27:47.972575 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b44fx"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.004658 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.004875 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4k4m\" (UniqueName: \"kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.054949 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cbqw2"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.056586 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.086203 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cbqw2"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.120557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4k4m\" (UniqueName: \"kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.120646 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.120691 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.120739 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhcs\" (UniqueName: \"kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.123117 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.189569 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e180-account-create-xzfhs"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.194076 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.198847 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.210089 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e180-account-create-xzfhs"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.220247 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4k4m\" (UniqueName: \"kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m\") pod \"heat-db-create-b44fx\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.223960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.224012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhcs\" (UniqueName: \"kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " 
pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.225187 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.275754 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhcs\" (UniqueName: \"kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs\") pod \"cinder-db-create-cbqw2\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.279407 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b44fx" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.326776 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts\") pod \"cinder-e180-account-create-xzfhs\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.326972 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qq7v\" (UniqueName: \"kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v\") pod \"cinder-e180-account-create-xzfhs\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.333581 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8k6hl"] Nov 24 21:27:48 crc 
kubenswrapper[4801]: I1124 21:27:48.340646 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.357326 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8k6hl"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.368608 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e08c-account-create-vct62"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.374512 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.380400 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.382444 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.394562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e08c-account-create-vct62"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.428825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzkmf\" (UniqueName: \"kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.428892 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qq7v\" (UniqueName: \"kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v\") pod \"cinder-e180-account-create-xzfhs\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " 
pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.429035 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.429151 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.429236 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts\") pod \"cinder-e180-account-create-xzfhs\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.429260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4j5\" (UniqueName: \"kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.429965 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts\") pod \"cinder-e180-account-create-xzfhs\" (UID: 
\"06671e3a-592e-476d-bc4b-74646cfc034b\") " pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.432321 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mqjvg"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.434133 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.439978 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.440209 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.440348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.440490 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s9sx4" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.456449 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mqjvg"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.499215 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-08fa-account-create-lpjbf"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.501147 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.507212 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.524065 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-08fa-account-create-lpjbf"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.531987 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qq7v\" (UniqueName: \"kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v\") pod \"cinder-e180-account-create-xzfhs\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.539256 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pkrzc"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540042 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hl97\" (UniqueName: \"kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzkmf\" (UniqueName: \"kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540254 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540314 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rwx\" (UniqueName: \"kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540429 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540513 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540635 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd4j5\" (UniqueName: \"kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.540753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.541665 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.542010 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.542559 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.571352 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pkrzc"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.573271 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzkmf\" (UniqueName: 
\"kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf\") pod \"barbican-e08c-account-create-vct62\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.605063 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd4j5\" (UniqueName: \"kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5\") pod \"barbican-db-create-8k6hl\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hl97\" (UniqueName: \"kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646757 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5n4\" (UniqueName: \"kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646799 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rwx\" (UniqueName: 
\"kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646881 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.646930 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.647837 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.662295 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.672408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.676635 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.679540 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rwx\" (UniqueName: \"kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx\") pod \"keystone-db-sync-mqjvg\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") " pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.688728 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.695335 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hl97\" (UniqueName: \"kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97\") pod \"heat-08fa-account-create-lpjbf\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.699311 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.712610 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0be8-account-create-r6lvl"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.718559 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.722715 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.751140 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.751257 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.751562 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tvrw\" (UniqueName: \"kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.751624 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5n4\" 
(UniqueName: \"kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.752126 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0be8-account-create-r6lvl"] Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.757047 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.781236 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5n4\" (UniqueName: \"kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4\") pod \"neutron-db-create-pkrzc\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.853920 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tvrw\" (UniqueName: \"kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.854197 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc 
kubenswrapper[4801]: I1124 21:27:48.855142 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.877789 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tvrw\" (UniqueName: \"kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw\") pod \"neutron-0be8-account-create-r6lvl\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.961495 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mqjvg" Nov 24 21:27:48 crc kubenswrapper[4801]: I1124 21:27:48.986243 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.055168 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.068311 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.348455 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b44fx"] Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.379002 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cbqw2"] Nov 24 21:27:49 crc kubenswrapper[4801]: E1124 21:27:49.686327 4801 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.83:52262->38.102.83.83:34545: write tcp 38.102.83.83:52262->38.102.83.83:34545: write: broken pipe Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.711199 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e180-account-create-xzfhs"] Nov 24 21:27:49 crc kubenswrapper[4801]: E1124 21:27:49.693910 4801 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.83:52262->38.102.83.83:34545: write tcp 192.168.126.11:10250->192.168.126.11:35202: write: broken pipe Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.717323 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbqw2" event={"ID":"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0","Type":"ContainerStarted","Data":"9affab7964613877643de73061089e0e5e127cddcf418081d7961fc613119d3a"} Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.732699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b44fx" event={"ID":"e758b4dd-3072-46bb-9e96-c64dc748c8e4","Type":"ContainerStarted","Data":"e1bc2629668666f12edaedb39725582c93c1f356474736533913cca83eeecce9"} Nov 24 21:27:49 crc kubenswrapper[4801]: I1124 21:27:49.841943 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8k6hl"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.156094 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-pkrzc"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.178224 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-08fa-account-create-lpjbf"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.192567 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mqjvg"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.225114 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e08c-account-create-vct62"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.306397 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0be8-account-create-r6lvl"] Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.758691 4801 generic.go:334] "Generic (PLEG): container finished" podID="eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" containerID="44acd4245672b1699582211b97f3448966638a1333ee775e900900cf01a22a2a" exitCode=0 Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.759889 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbqw2" event={"ID":"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0","Type":"ContainerDied","Data":"44acd4245672b1699582211b97f3448966638a1333ee775e900900cf01a22a2a"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.762958 4801 generic.go:334] "Generic (PLEG): container finished" podID="e758b4dd-3072-46bb-9e96-c64dc748c8e4" containerID="e8782d32b55f7efdbbdf83274b894c9abea362b690b0f5c30d4d1a52928c6104" exitCode=0 Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.763020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b44fx" event={"ID":"e758b4dd-3072-46bb-9e96-c64dc748c8e4","Type":"ContainerDied","Data":"e8782d32b55f7efdbbdf83274b894c9abea362b690b0f5c30d4d1a52928c6104"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.765327 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e08c-account-create-vct62" 
event={"ID":"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31","Type":"ContainerStarted","Data":"1e4525db2c0d3f8a40bb6b6ed967aa774649d0f81db0416264dbdb697d5f69d8"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.768838 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mqjvg" event={"ID":"faa8d5ae-1109-4365-8360-ad5ac4bd8198","Type":"ContainerStarted","Data":"9d4423e600a947f4dcfd62b97a0d096214e5d7bcfe2d3f46d5ef7dd0f5981c7d"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.771061 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pkrzc" event={"ID":"00ffebca-fff7-4df2-8db3-b302a7fcd3e1","Type":"ContainerStarted","Data":"9facf0c61f550af267ee9e1480c6dc430c5e1f0d7eeed81a12030eb94db6543a"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.774443 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8k6hl" event={"ID":"d88357b0-8aad-4e9a-981f-d520525938a1","Type":"ContainerStarted","Data":"c8d9195f5299383076b86de0a020ebe3e859b2ad5c743facf6d07f0ce65f0fea"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.774473 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8k6hl" event={"ID":"d88357b0-8aad-4e9a-981f-d520525938a1","Type":"ContainerStarted","Data":"b23509092c9ed47aa46ed140c47fb3b13bdc0c7161eba5f066d38702f879b7b0"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.782449 4801 generic.go:334] "Generic (PLEG): container finished" podID="06671e3a-592e-476d-bc4b-74646cfc034b" containerID="5219f2e69a8e0430149fe5c684e1cd7ec19f550414a83e1a4c49d22d3ee5ca12" exitCode=0 Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.782530 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e180-account-create-xzfhs" event={"ID":"06671e3a-592e-476d-bc4b-74646cfc034b","Type":"ContainerDied","Data":"5219f2e69a8e0430149fe5c684e1cd7ec19f550414a83e1a4c49d22d3ee5ca12"} Nov 24 21:27:50 crc 
kubenswrapper[4801]: I1124 21:27:50.782558 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e180-account-create-xzfhs" event={"ID":"06671e3a-592e-476d-bc4b-74646cfc034b","Type":"ContainerStarted","Data":"b9f7cb404789933aee1881fc1a66c3d9e0fcb49cae1f86678b12d0c67715fb20"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.785264 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0be8-account-create-r6lvl" event={"ID":"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f","Type":"ContainerStarted","Data":"af1e87b4c9c80b8d22da737586228fd006ba1bd14dcbe30ecca41b6d7ce7dc1e"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.796675 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-08fa-account-create-lpjbf" event={"ID":"6d62c145-e144-4f0a-8dcb-73d67b536463","Type":"ContainerStarted","Data":"d196b1d20acf0c45a9c68334d48c94ca9fbc4b54cf81f7c3e412f9f699a3a309"} Nov 24 21:27:50 crc kubenswrapper[4801]: I1124 21:27:50.802265 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-e08c-account-create-vct62" podStartSLOduration=2.802250854 podStartE2EDuration="2.802250854s" podCreationTimestamp="2025-11-24 21:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:50.796928873 +0000 UTC m=+1242.879515543" watchObservedRunningTime="2025-11-24 21:27:50.802250854 +0000 UTC m=+1242.884837524" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.148360 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.228013 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.228838 4801 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-698758b865-ntxvj" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="dnsmasq-dns" containerID="cri-o://b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277" gracePeriod=10 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.801890 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.819428 4801 generic.go:334] "Generic (PLEG): container finished" podID="87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" containerID="34c83b35759f47d9c765a8eb6fb62071b13db6326b9aed2dede1a6b3849132ab" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.819622 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0be8-account-create-r6lvl" event={"ID":"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f","Type":"ContainerDied","Data":"34c83b35759f47d9c765a8eb6fb62071b13db6326b9aed2dede1a6b3849132ab"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.821676 4801 generic.go:334] "Generic (PLEG): container finished" podID="6d62c145-e144-4f0a-8dcb-73d67b536463" containerID="651d976016bdef41ced6683d410fd9347fd673ea02407ed76d22dd888b325f29" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.821743 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-08fa-account-create-lpjbf" event={"ID":"6d62c145-e144-4f0a-8dcb-73d67b536463","Type":"ContainerDied","Data":"651d976016bdef41ced6683d410fd9347fd673ea02407ed76d22dd888b325f29"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.826188 4801 generic.go:334] "Generic (PLEG): container finished" podID="2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" containerID="49d3d4f6b082362338c9f4fdb755861e2f12c23bad176b88c48490518d77cb63" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.826238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e08c-account-create-vct62" 
event={"ID":"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31","Type":"ContainerDied","Data":"49d3d4f6b082362338c9f4fdb755861e2f12c23bad176b88c48490518d77cb63"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.835987 4801 generic.go:334] "Generic (PLEG): container finished" podID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerID="b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.836048 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ntxvj" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.836124 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ntxvj" event={"ID":"ac7b18f4-5d92-491c-a4f9-de94a69c61f1","Type":"ContainerDied","Data":"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.836160 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ntxvj" event={"ID":"ac7b18f4-5d92-491c-a4f9-de94a69c61f1","Type":"ContainerDied","Data":"2690973a3544b210914f7d4e8ad1b8a9bf2cf194ee87f630fe970fcd2b5eba35"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.836186 4801 scope.go:117] "RemoveContainer" containerID="b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.866799 4801 generic.go:334] "Generic (PLEG): container finished" podID="e4b29e97-8d63-4b0d-8df1-9e832ecfaf17" containerID="9b306097f579ed9be8ca9a31f0eac180f6dfd3d8d57eb7782f100148f28cd190" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.867343 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerDied","Data":"9b306097f579ed9be8ca9a31f0eac180f6dfd3d8d57eb7782f100148f28cd190"} Nov 24 21:27:51 crc 
kubenswrapper[4801]: I1124 21:27:51.886453 4801 generic.go:334] "Generic (PLEG): container finished" podID="00ffebca-fff7-4df2-8db3-b302a7fcd3e1" containerID="750798ae3a0ed6ce1fc9e0c320acc3bb9ff19af3784c657fc6a9c3b592607bae" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.886588 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pkrzc" event={"ID":"00ffebca-fff7-4df2-8db3-b302a7fcd3e1","Type":"ContainerDied","Data":"750798ae3a0ed6ce1fc9e0c320acc3bb9ff19af3784c657fc6a9c3b592607bae"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.892936 4801 generic.go:334] "Generic (PLEG): container finished" podID="d88357b0-8aad-4e9a-981f-d520525938a1" containerID="c8d9195f5299383076b86de0a020ebe3e859b2ad5c743facf6d07f0ce65f0fea" exitCode=0 Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.893407 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8k6hl" event={"ID":"d88357b0-8aad-4e9a-981f-d520525938a1","Type":"ContainerDied","Data":"c8d9195f5299383076b86de0a020ebe3e859b2ad5c743facf6d07f0ce65f0fea"} Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.909974 4801 scope.go:117] "RemoveContainer" containerID="3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54" Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.985856 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb\") pod \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.986660 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config\") pod \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " Nov 24 21:27:51 crc 
kubenswrapper[4801]: I1124 21:27:51.986743 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb\") pod \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.986792 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc\") pod \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " Nov 24 21:27:51 crc kubenswrapper[4801]: I1124 21:27:51.986852 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cfd9\" (UniqueName: \"kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9\") pod \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\" (UID: \"ac7b18f4-5d92-491c-a4f9-de94a69c61f1\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.001892 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9" (OuterVolumeSpecName: "kube-api-access-8cfd9") pod "ac7b18f4-5d92-491c-a4f9-de94a69c61f1" (UID: "ac7b18f4-5d92-491c-a4f9-de94a69c61f1"). InnerVolumeSpecName "kube-api-access-8cfd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.055909 4801 scope.go:117] "RemoveContainer" containerID="b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277" Nov 24 21:27:52 crc kubenswrapper[4801]: E1124 21:27:52.057054 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277\": container with ID starting with b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277 not found: ID does not exist" containerID="b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.057092 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277"} err="failed to get container status \"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277\": rpc error: code = NotFound desc = could not find container \"b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277\": container with ID starting with b765dcb76b13735abe15ac1f8d55118a9d880de2402e8e571672ac50021e6277 not found: ID does not exist" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.057120 4801 scope.go:117] "RemoveContainer" containerID="3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54" Nov 24 21:27:52 crc kubenswrapper[4801]: E1124 21:27:52.062034 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54\": container with ID starting with 3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54 not found: ID does not exist" containerID="3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.062069 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54"} err="failed to get container status \"3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54\": rpc error: code = NotFound desc = could not find container \"3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54\": container with ID starting with 3df339aa437b68a1205613d8965f272cc0649a744f65332f11560792eef88c54 not found: ID does not exist" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.063267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac7b18f4-5d92-491c-a4f9-de94a69c61f1" (UID: "ac7b18f4-5d92-491c-a4f9-de94a69c61f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.084057 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config" (OuterVolumeSpecName: "config") pod "ac7b18f4-5d92-491c-a4f9-de94a69c61f1" (UID: "ac7b18f4-5d92-491c-a4f9-de94a69c61f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.085447 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac7b18f4-5d92-491c-a4f9-de94a69c61f1" (UID: "ac7b18f4-5d92-491c-a4f9-de94a69c61f1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.090495 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.090527 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.090540 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cfd9\" (UniqueName: \"kubernetes.io/projected/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-kube-api-access-8cfd9\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.090552 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.129686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac7b18f4-5d92-491c-a4f9-de94a69c61f1" (UID: "ac7b18f4-5d92-491c-a4f9-de94a69c61f1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.192817 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac7b18f4-5d92-491c-a4f9-de94a69c61f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.207029 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.221002 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ntxvj"] Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.542955 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.602579 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts\") pod \"d88357b0-8aad-4e9a-981f-d520525938a1\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.603187 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd4j5\" (UniqueName: \"kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5\") pod \"d88357b0-8aad-4e9a-981f-d520525938a1\" (UID: \"d88357b0-8aad-4e9a-981f-d520525938a1\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.603501 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d88357b0-8aad-4e9a-981f-d520525938a1" (UID: "d88357b0-8aad-4e9a-981f-d520525938a1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.603912 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88357b0-8aad-4e9a-981f-d520525938a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.609801 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5" (OuterVolumeSpecName: "kube-api-access-sd4j5") pod "d88357b0-8aad-4e9a-981f-d520525938a1" (UID: "d88357b0-8aad-4e9a-981f-d520525938a1"). InnerVolumeSpecName "kube-api-access-sd4j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.671884 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b44fx" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.678789 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" path="/var/lib/kubelet/pods/ac7b18f4-5d92-491c-a4f9-de94a69c61f1/volumes" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.679770 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.700136 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.704742 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts\") pod \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.704858 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts\") pod \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.704930 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhcs\" (UniqueName: \"kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs\") pod \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\" (UID: \"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.704955 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4k4m\" (UniqueName: \"kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m\") pod \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\" (UID: \"e758b4dd-3072-46bb-9e96-c64dc748c8e4\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.706512 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" (UID: "eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.711281 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd4j5\" (UniqueName: \"kubernetes.io/projected/d88357b0-8aad-4e9a-981f-d520525938a1-kube-api-access-sd4j5\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.711374 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.715702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e758b4dd-3072-46bb-9e96-c64dc748c8e4" (UID: "e758b4dd-3072-46bb-9e96-c64dc748c8e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.717507 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m" (OuterVolumeSpecName: "kube-api-access-z4k4m") pod "e758b4dd-3072-46bb-9e96-c64dc748c8e4" (UID: "e758b4dd-3072-46bb-9e96-c64dc748c8e4"). InnerVolumeSpecName "kube-api-access-z4k4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.727329 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs" (OuterVolumeSpecName: "kube-api-access-fqhcs") pod "eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" (UID: "eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0"). InnerVolumeSpecName "kube-api-access-fqhcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.812829 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qq7v\" (UniqueName: \"kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v\") pod \"06671e3a-592e-476d-bc4b-74646cfc034b\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.813876 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts\") pod \"06671e3a-592e-476d-bc4b-74646cfc034b\" (UID: \"06671e3a-592e-476d-bc4b-74646cfc034b\") " Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.814480 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06671e3a-592e-476d-bc4b-74646cfc034b" (UID: "06671e3a-592e-476d-bc4b-74646cfc034b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.814997 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06671e3a-592e-476d-bc4b-74646cfc034b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.815041 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e758b4dd-3072-46bb-9e96-c64dc748c8e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.815055 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqhcs\" (UniqueName: \"kubernetes.io/projected/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0-kube-api-access-fqhcs\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.815070 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4k4m\" (UniqueName: \"kubernetes.io/projected/e758b4dd-3072-46bb-9e96-c64dc748c8e4-kube-api-access-z4k4m\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.816876 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v" (OuterVolumeSpecName: "kube-api-access-4qq7v") pod "06671e3a-592e-476d-bc4b-74646cfc034b" (UID: "06671e3a-592e-476d-bc4b-74646cfc034b"). InnerVolumeSpecName "kube-api-access-4qq7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.909441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbqw2" event={"ID":"eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0","Type":"ContainerDied","Data":"9affab7964613877643de73061089e0e5e127cddcf418081d7961fc613119d3a"} Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.909490 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9affab7964613877643de73061089e0e5e127cddcf418081d7961fc613119d3a" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.909493 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbqw2" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.911909 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b44fx" event={"ID":"e758b4dd-3072-46bb-9e96-c64dc748c8e4","Type":"ContainerDied","Data":"e1bc2629668666f12edaedb39725582c93c1f356474736533913cca83eeecce9"} Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.911967 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1bc2629668666f12edaedb39725582c93c1f356474736533913cca83eeecce9" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.912042 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-b44fx" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.921236 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qq7v\" (UniqueName: \"kubernetes.io/projected/06671e3a-592e-476d-bc4b-74646cfc034b-kube-api-access-4qq7v\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.940694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerStarted","Data":"db38270d4f7731499bda4819a87324b171f37ef94ec25a22c5e31476cbfa0956"} Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.943888 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8k6hl" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.944987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8k6hl" event={"ID":"d88357b0-8aad-4e9a-981f-d520525938a1","Type":"ContainerDied","Data":"b23509092c9ed47aa46ed140c47fb3b13bdc0c7161eba5f066d38702f879b7b0"} Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.945042 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23509092c9ed47aa46ed140c47fb3b13bdc0c7161eba5f066d38702f879b7b0" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.947227 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e180-account-create-xzfhs" Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.947222 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e180-account-create-xzfhs" event={"ID":"06671e3a-592e-476d-bc4b-74646cfc034b","Type":"ContainerDied","Data":"b9f7cb404789933aee1881fc1a66c3d9e0fcb49cae1f86678b12d0c67715fb20"} Nov 24 21:27:52 crc kubenswrapper[4801]: I1124 21:27:52.947288 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f7cb404789933aee1881fc1a66c3d9e0fcb49cae1f86678b12d0c67715fb20" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.239705 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.333388 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzkmf\" (UniqueName: \"kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf\") pod \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.334042 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts\") pod \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\" (UID: \"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.334641 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" (UID: "2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.335573 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.337620 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf" (OuterVolumeSpecName: "kube-api-access-xzkmf") pod "2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" (UID: "2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31"). InnerVolumeSpecName "kube-api-access-xzkmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.437569 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzkmf\" (UniqueName: \"kubernetes.io/projected/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31-kube-api-access-xzkmf\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.553379 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.559503 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.641515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts\") pod \"6d62c145-e144-4f0a-8dcb-73d67b536463\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.641972 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts\") pod \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.642028 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hl97\" (UniqueName: \"kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97\") pod \"6d62c145-e144-4f0a-8dcb-73d67b536463\" (UID: \"6d62c145-e144-4f0a-8dcb-73d67b536463\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.642053 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn5n4\" (UniqueName: \"kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4\") pod \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\" (UID: \"00ffebca-fff7-4df2-8db3-b302a7fcd3e1\") " Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.642158 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d62c145-e144-4f0a-8dcb-73d67b536463" (UID: "6d62c145-e144-4f0a-8dcb-73d67b536463"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.642585 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00ffebca-fff7-4df2-8db3-b302a7fcd3e1" (UID: "00ffebca-fff7-4df2-8db3-b302a7fcd3e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.643308 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d62c145-e144-4f0a-8dcb-73d67b536463-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.643327 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.660147 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4" (OuterVolumeSpecName: "kube-api-access-kn5n4") pod "00ffebca-fff7-4df2-8db3-b302a7fcd3e1" (UID: "00ffebca-fff7-4df2-8db3-b302a7fcd3e1"). InnerVolumeSpecName "kube-api-access-kn5n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:53 crc kubenswrapper[4801]: I1124 21:27:53.749286 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn5n4\" (UniqueName: \"kubernetes.io/projected/00ffebca-fff7-4df2-8db3-b302a7fcd3e1-kube-api-access-kn5n4\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.138940 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97" (OuterVolumeSpecName: "kube-api-access-8hl97") pod "6d62c145-e144-4f0a-8dcb-73d67b536463" (UID: "6d62c145-e144-4f0a-8dcb-73d67b536463"). InnerVolumeSpecName "kube-api-access-8hl97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.167910 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hl97\" (UniqueName: \"kubernetes.io/projected/6d62c145-e144-4f0a-8dcb-73d67b536463-kube-api-access-8hl97\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.208323 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e08c-account-create-vct62" event={"ID":"2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31","Type":"ContainerDied","Data":"1e4525db2c0d3f8a40bb6b6ed967aa774649d0f81db0416264dbdb697d5f69d8"} Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.208426 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4525db2c0d3f8a40bb6b6ed967aa774649d0f81db0416264dbdb697d5f69d8" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.208610 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e08c-account-create-vct62" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.213561 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pkrzc" event={"ID":"00ffebca-fff7-4df2-8db3-b302a7fcd3e1","Type":"ContainerDied","Data":"9facf0c61f550af267ee9e1480c6dc430c5e1f0d7eeed81a12030eb94db6543a"} Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.213629 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9facf0c61f550af267ee9e1480c6dc430c5e1f0d7eeed81a12030eb94db6543a" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.213767 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pkrzc" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.220910 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-08fa-account-create-lpjbf" event={"ID":"6d62c145-e144-4f0a-8dcb-73d67b536463","Type":"ContainerDied","Data":"d196b1d20acf0c45a9c68334d48c94ca9fbc4b54cf81f7c3e412f9f699a3a309"} Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.221356 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d196b1d20acf0c45a9c68334d48c94ca9fbc4b54cf81f7c3e412f9f699a3a309" Nov 24 21:27:54 crc kubenswrapper[4801]: I1124 21:27:54.221468 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-08fa-account-create-lpjbf" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.255061 4801 generic.go:334] "Generic (PLEG): container finished" podID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" containerID="a1ebcbb169350fb7019eb7660e9579fa8b4693f51417052daeb4ca7b23b233f6" exitCode=0 Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.255144 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m4tng" event={"ID":"b7d70dfa-a9e5-417b-9506-95bd490da3ef","Type":"ContainerDied","Data":"a1ebcbb169350fb7019eb7660e9579fa8b4693f51417052daeb4ca7b23b233f6"} Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.261339 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerStarted","Data":"a00d57b74d28dca1bbd1f1081c41da1c6edda589180f2dc1392f26d1a8289bb8"} Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.503583 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.531262 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tvrw\" (UniqueName: \"kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw\") pod \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.531512 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts\") pod \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\" (UID: \"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f\") " Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.533887 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" (UID: "87237f3a-08f9-46a9-930e-a3ca7bf1cd9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.571077 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw" (OuterVolumeSpecName: "kube-api-access-7tvrw") pod "87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" (UID: "87237f3a-08f9-46a9-930e-a3ca7bf1cd9f"). InnerVolumeSpecName "kube-api-access-7tvrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.635549 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.635608 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tvrw\" (UniqueName: \"kubernetes.io/projected/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f-kube-api-access-7tvrw\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:56 crc kubenswrapper[4801]: I1124 21:27:56.709595 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-ntxvj" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: i/o timeout" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.274339 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mqjvg" event={"ID":"faa8d5ae-1109-4365-8360-ad5ac4bd8198","Type":"ContainerStarted","Data":"7d16d1d1dd95e06912c9264b33d7cad827fc2c1c8e48894f08c456eb4058e414"} Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.278509 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e4b29e97-8d63-4b0d-8df1-9e832ecfaf17","Type":"ContainerStarted","Data":"bccc959c3d23a9306da40a5a25f1ee8d949e468a10a8fcfdc2cc1336b34c2c36"} Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.280745 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0be8-account-create-r6lvl" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.280778 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0be8-account-create-r6lvl" event={"ID":"87237f3a-08f9-46a9-930e-a3ca7bf1cd9f","Type":"ContainerDied","Data":"af1e87b4c9c80b8d22da737586228fd006ba1bd14dcbe30ecca41b6d7ce7dc1e"} Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.280802 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1e87b4c9c80b8d22da737586228fd006ba1bd14dcbe30ecca41b6d7ce7dc1e" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.307745 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mqjvg" podStartSLOduration=3.063658327 podStartE2EDuration="9.307711712s" podCreationTimestamp="2025-11-24 21:27:48 +0000 UTC" firstStartedPulling="2025-11-24 21:27:50.251129854 +0000 UTC m=+1242.333716524" lastFinishedPulling="2025-11-24 21:27:56.495183219 +0000 UTC m=+1248.577769909" observedRunningTime="2025-11-24 21:27:57.294889545 +0000 UTC m=+1249.377476225" watchObservedRunningTime="2025-11-24 21:27:57.307711712 +0000 UTC m=+1249.390298422" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.333255 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.333230111 podStartE2EDuration="19.333230111s" podCreationTimestamp="2025-11-24 21:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:27:57.326163237 +0000 UTC m=+1249.408749917" watchObservedRunningTime="2025-11-24 21:27:57.333230111 +0000 UTC m=+1249.415816781" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.772086 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-m4tng" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.875442 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle\") pod \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.875635 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data\") pod \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.875683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data\") pod \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.875731 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk\") pod \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\" (UID: \"b7d70dfa-a9e5-417b-9506-95bd490da3ef\") " Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.883000 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b7d70dfa-a9e5-417b-9506-95bd490da3ef" (UID: "b7d70dfa-a9e5-417b-9506-95bd490da3ef"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.883300 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk" (OuterVolumeSpecName: "kube-api-access-x25bk") pod "b7d70dfa-a9e5-417b-9506-95bd490da3ef" (UID: "b7d70dfa-a9e5-417b-9506-95bd490da3ef"). InnerVolumeSpecName "kube-api-access-x25bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.909582 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7d70dfa-a9e5-417b-9506-95bd490da3ef" (UID: "b7d70dfa-a9e5-417b-9506-95bd490da3ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.940394 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data" (OuterVolumeSpecName: "config-data") pod "b7d70dfa-a9e5-417b-9506-95bd490da3ef" (UID: "b7d70dfa-a9e5-417b-9506-95bd490da3ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.978802 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.978846 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.978862 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d70dfa-a9e5-417b-9506-95bd490da3ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:57 crc kubenswrapper[4801]: I1124 21:27:57.978875 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/b7d70dfa-a9e5-417b-9506-95bd490da3ef-kube-api-access-x25bk\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.301284 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m4tng" event={"ID":"b7d70dfa-a9e5-417b-9506-95bd490da3ef","Type":"ContainerDied","Data":"6885f9df5cc6fdf7340e5dd96c2db367f67e843724bb9f686439c5b478a969c4"} Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.301384 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6885f9df5cc6fdf7340e5dd96c2db367f67e843724bb9f686439c5b478a969c4" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.303450 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-m4tng" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.707285 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"] Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708332 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708354 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708384 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06671e3a-592e-476d-bc4b-74646cfc034b" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708393 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="06671e3a-592e-476d-bc4b-74646cfc034b" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708405 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" containerName="glance-db-sync" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708412 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" containerName="glance-db-sync" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708452 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88357b0-8aad-4e9a-981f-d520525938a1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708461 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88357b0-8aad-4e9a-981f-d520525938a1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708472 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="init" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708478 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="init" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708491 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="dnsmasq-dns" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708498 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="dnsmasq-dns" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708514 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e758b4dd-3072-46bb-9e96-c64dc748c8e4" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708522 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e758b4dd-3072-46bb-9e96-c64dc748c8e4" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708537 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ffebca-fff7-4df2-8db3-b302a7fcd3e1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708543 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ffebca-fff7-4df2-8db3-b302a7fcd3e1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708558 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708564 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708581 4801 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708588 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: E1124 21:27:58.708599 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d62c145-e144-4f0a-8dcb-73d67b536463" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708606 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d62c145-e144-4f0a-8dcb-73d67b536463" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708837 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88357b0-8aad-4e9a-981f-d520525938a1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708849 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7b18f4-5d92-491c-a4f9-de94a69c61f1" containerName="dnsmasq-dns" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708860 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" containerName="glance-db-sync" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708873 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="06671e3a-592e-476d-bc4b-74646cfc034b" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708889 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708900 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ffebca-fff7-4df2-8db3-b302a7fcd3e1" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 
21:27:58.708923 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708941 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708955 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d62c145-e144-4f0a-8dcb-73d67b536463" containerName="mariadb-account-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.708971 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e758b4dd-3072-46bb-9e96-c64dc748c8e4" containerName="mariadb-database-create" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.710451 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.721004 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"] Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.811742 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.811823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc 
kubenswrapper[4801]: I1124 21:27:58.812027 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.812306 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9h8\" (UniqueName: \"kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.812491 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.812556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.917195 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9h8\" (UniqueName: \"kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" 
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.917314 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.917353 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.917459 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.917557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.918077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.918848 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.918907 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.918941 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.918976 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.919705 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:58 crc kubenswrapper[4801]: I1124 21:27:58.938224 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9h8\" (UniqueName: \"kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8\") pod \"dnsmasq-dns-74f6bcbc87-5rc4w\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:59 crc kubenswrapper[4801]: I1124 21:27:59.047800 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:27:59 crc kubenswrapper[4801]: I1124 21:27:59.211233 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:27:59 crc kubenswrapper[4801]: I1124 21:27:59.573876 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"]
Nov 24 21:27:59 crc kubenswrapper[4801]: W1124 21:27:59.583905 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1709bb4a_ccf5_471d_9019_5bd4691071ad.slice/crio-259590078c7c318a445570442e9dbedc6fc6ebead3432a7aaa33b26ee71fefc3 WatchSource:0}: Error finding container 259590078c7c318a445570442e9dbedc6fc6ebead3432a7aaa33b26ee71fefc3: Status 404 returned error can't find the container with id 259590078c7c318a445570442e9dbedc6fc6ebead3432a7aaa33b26ee71fefc3
Nov 24 21:28:00 crc kubenswrapper[4801]: I1124 21:28:00.335727 4801 generic.go:334] "Generic (PLEG): container finished" podID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerID="dc7d60856291cec30637dbe084bd427de707dd856111c19e0067b9ff68c5738a" exitCode=0
Nov 24 21:28:00 crc kubenswrapper[4801]: I1124 21:28:00.335839 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" event={"ID":"1709bb4a-ccf5-471d-9019-5bd4691071ad","Type":"ContainerDied","Data":"dc7d60856291cec30637dbe084bd427de707dd856111c19e0067b9ff68c5738a"}
Nov 24 21:28:00 crc kubenswrapper[4801]: I1124 21:28:00.335896 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" event={"ID":"1709bb4a-ccf5-471d-9019-5bd4691071ad","Type":"ContainerStarted","Data":"259590078c7c318a445570442e9dbedc6fc6ebead3432a7aaa33b26ee71fefc3"}
Nov 24 21:28:00 crc kubenswrapper[4801]: I1124 21:28:00.340327 4801 generic.go:334] "Generic (PLEG): container finished" podID="faa8d5ae-1109-4365-8360-ad5ac4bd8198" containerID="7d16d1d1dd95e06912c9264b33d7cad827fc2c1c8e48894f08c456eb4058e414" exitCode=0
Nov 24 21:28:00 crc kubenswrapper[4801]: I1124 21:28:00.340378 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mqjvg" event={"ID":"faa8d5ae-1109-4365-8360-ad5ac4bd8198","Type":"ContainerDied","Data":"7d16d1d1dd95e06912c9264b33d7cad827fc2c1c8e48894f08c456eb4058e414"}
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.353921 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" event={"ID":"1709bb4a-ccf5-471d-9019-5bd4691071ad","Type":"ContainerStarted","Data":"09b495bc35c9249cd510cd94691a9516c99df64623bc87f9fd1a29ebec2e0651"}
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.354544 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w"
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.388469 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" podStartSLOduration=3.38844107 podStartE2EDuration="3.38844107s" podCreationTimestamp="2025-11-24 21:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:01.379588173 +0000 UTC m=+1253.462174883" watchObservedRunningTime="2025-11-24 21:28:01.38844107 +0000 UTC m=+1253.471027780"
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.849484 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mqjvg"
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.923985 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data\") pod \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") "
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.924658 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle\") pod \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") "
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.924816 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7rwx\" (UniqueName: \"kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx\") pod \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\" (UID: \"faa8d5ae-1109-4365-8360-ad5ac4bd8198\") "
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.931688 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx" (OuterVolumeSpecName: "kube-api-access-f7rwx") pod "faa8d5ae-1109-4365-8360-ad5ac4bd8198" (UID: "faa8d5ae-1109-4365-8360-ad5ac4bd8198"). InnerVolumeSpecName "kube-api-access-f7rwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:28:01 crc kubenswrapper[4801]: I1124 21:28:01.959727 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa8d5ae-1109-4365-8360-ad5ac4bd8198" (UID: "faa8d5ae-1109-4365-8360-ad5ac4bd8198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.003795 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data" (OuterVolumeSpecName: "config-data") pod "faa8d5ae-1109-4365-8360-ad5ac4bd8198" (UID: "faa8d5ae-1109-4365-8360-ad5ac4bd8198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.027737 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.027777 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8d5ae-1109-4365-8360-ad5ac4bd8198-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.027788 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7rwx\" (UniqueName: \"kubernetes.io/projected/faa8d5ae-1109-4365-8360-ad5ac4bd8198-kube-api-access-f7rwx\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.372589 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mqjvg"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.373563 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mqjvg" event={"ID":"faa8d5ae-1109-4365-8360-ad5ac4bd8198","Type":"ContainerDied","Data":"9d4423e600a947f4dcfd62b97a0d096214e5d7bcfe2d3f46d5ef7dd0f5981c7d"}
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.373661 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4423e600a947f4dcfd62b97a0d096214e5d7bcfe2d3f46d5ef7dd0f5981c7d"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.708666 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.721416 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hhvd4"]
Nov 24 21:28:02 crc kubenswrapper[4801]: E1124 21:28:02.721954 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa8d5ae-1109-4365-8360-ad5ac4bd8198" containerName="keystone-db-sync"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.722014 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa8d5ae-1109-4365-8360-ad5ac4bd8198" containerName="keystone-db-sync"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.722264 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa8d5ae-1109-4365-8360-ad5ac4bd8198" containerName="keystone-db-sync"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.723221 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.727841 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.728114 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.728227 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s9sx4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.728379 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.728516 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.738029 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhvd4"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.746412 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.748667 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.839663 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875034 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875105 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dwq\" (UniqueName: \"kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875146 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875682 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875721 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.875920 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.878033 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.878120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.878186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.878343 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspfn\" (UniqueName: \"kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.889820 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-x7r2c"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.892010 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.895763 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.895969 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vv9tz"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.902613 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-x7r2c"]
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.982728 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.982896 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dwq\" (UniqueName: \"kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983793 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983832 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983879 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983917 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983944 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7sk\" (UniqueName: \"kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.983977 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.984111 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.984142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.984174 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.984200 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.984258 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspfn\" (UniqueName: \"kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.988066 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.988096 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.989342 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.989962 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.991967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:02 crc kubenswrapper[4801]: I1124 21:28:02.996086 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.006269 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.012442 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.015869 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.017833 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ccwmw"]
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.018176 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspfn\" (UniqueName: \"kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.020390 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.025906 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4bhzv"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.026033 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.025896 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts\") pod \"keystone-bootstrap-hhvd4\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.026266 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.027936 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dwq\" (UniqueName: \"kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq\") pod \"dnsmasq-dns-847c4cc679-nm8xf\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.061768 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ccwmw"]
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.087831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.087924 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr7sk\" (UniqueName: \"kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.087992 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088070 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088156 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088209 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088302 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088338 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.088382 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wjl\" (UniqueName: \"kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.118028 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhvd4"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.123036 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.143007 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.149499 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.151885 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr7sk\" (UniqueName: \"kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk\") pod \"heat-db-sync-x7r2c\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") " pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.256241 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m42cd"]
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.269053 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.361288 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.364306 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.365633 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m42cd"]
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.365796 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m42cd"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.366490 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.373926 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.374161 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.374220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.374262 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wjl\" (UniqueName: \"kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw"
Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.383440 4801
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.384892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.399133 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.399692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.403232 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9lsvs" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.403631 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.403867 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.431493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts\") pod \"cinder-db-sync-ccwmw\" (UID: 
\"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.466502 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wjl\" (UniqueName: \"kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl\") pod \"cinder-db-sync-ccwmw\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.476729 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pzzvv"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.477681 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="dnsmasq-dns" containerID="cri-o://09b495bc35c9249cd510cd94691a9516c99df64623bc87f9fd1a29ebec2e0651" gracePeriod=10 Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.482039 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.487501 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.488460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5lkp\" (UniqueName: \"kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.490186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.489182 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sjm6h" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.489330 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.490079 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.491253 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.491381 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.521796 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.526154 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.536163 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pzzvv"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.547417 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.572459 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.579231 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.586088 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.586350 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.591477 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kmchz"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.602961 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.603054 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b99r6\" (UniqueName: \"kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5lkp\" (UniqueName: \"kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610126 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610507 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfp6x\" (UniqueName: \"kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610574 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610675 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610697 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610866 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.610894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.611032 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.619720 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.662442 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.663179 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kmchz"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.665012 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.668786 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.670638 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.671074 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gndc7" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.675059 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.682147 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5lkp\" (UniqueName: \"kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.685335 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data\") pod \"placement-db-sync-m42cd\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713331 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713456 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713624 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts\") pod 
\"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713716 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdhl\" (UniqueName: \"kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713943 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.713971 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714049 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " 
pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714073 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714113 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714177 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714202 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b99r6\" (UniqueName: \"kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc 
kubenswrapper[4801]: I1124 21:28:03.714332 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznbj\" (UniqueName: \"kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714526 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714609 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714675 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfp6x\" (UniqueName: \"kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.714780 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.716409 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.718698 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.718841 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.721702 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.724967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.725501 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.729075 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.730244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.755869 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b99r6\" (UniqueName: \"kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6\") pod \"dnsmasq-dns-785d8bcb8c-jf5b6\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.756567 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfp6x\" (UniqueName: \"kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x\") pod \"neutron-db-sync-pzzvv\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.791246 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.830825 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831333 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831393 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdhl\" (UniqueName: \"kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831484 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831523 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831566 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831589 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831686 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznbj\" (UniqueName: \"kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.831852 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.832880 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd\") pod \"ceilometer-0\" (UID: 
\"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.840641 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.844226 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.845415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.858841 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdhl\" (UniqueName: \"kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.859487 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznbj\" (UniqueName: \"kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.859687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.873099 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.876231 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " pod="openstack/ceilometer-0" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.876966 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.885423 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data\") pod \"barbican-db-sync-kmchz\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") " pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.914177 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:03 crc kubenswrapper[4801]: I1124 21:28:03.915150 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.055849 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kmchz" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.059731 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:04 crc kubenswrapper[4801]: W1124 21:28:04.062452 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59fafd61_3dc7_45b9_9285_ca2119d8123a.slice/crio-d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2 WatchSource:0}: Error finding container d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2: Status 404 returned error can't find the container with id d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2 Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.064878 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.072227 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.072392 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.072876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-68kgz" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.072891 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.112035 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.197708 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhvd4"] Nov 24 21:28:04 crc 
kubenswrapper[4801]: I1124 21:28:04.226565 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.252409 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.252545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.252744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.252825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc 
kubenswrapper[4801]: I1124 21:28:04.253090 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.253321 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzrcf\" (UniqueName: \"kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.253395 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.230848 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.259909 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.262099 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.263242 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.278348 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358220 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358303 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358335 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndbfn\" (UniqueName: \"kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358404 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358423 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358477 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358500 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358528 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358549 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358619 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358674 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzrcf\" (UniqueName: \"kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.358732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.360246 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.360474 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.360860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.370888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.370973 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.371141 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.371526 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.393333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.426803 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wzrcf\" (UniqueName: \"kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.441411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461347 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461450 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461476 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461535 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461724 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.461782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndbfn\" (UniqueName: \"kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.462603 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.466541 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.474257 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.474604 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.479620 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.488326 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.488902 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.496592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndbfn\" (UniqueName: \"kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.520763 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.542486 4801 generic.go:334] "Generic (PLEG): container finished" podID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerID="09b495bc35c9249cd510cd94691a9516c99df64623bc87f9fd1a29ebec2e0651" exitCode=0 Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.542570 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" event={"ID":"1709bb4a-ccf5-471d-9019-5bd4691071ad","Type":"ContainerDied","Data":"09b495bc35c9249cd510cd94691a9516c99df64623bc87f9fd1a29ebec2e0651"} Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.551034 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf" 
event={"ID":"a89c80c6-04e7-4fc1-aa02-556ad647c322","Type":"ContainerStarted","Data":"a4b6162e9f6b6de6411f0b47530b50e12514f0870a4907a237122639c35cd3e7"} Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.567581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhvd4" event={"ID":"59fafd61-3dc7-45b9-9285-ca2119d8123a","Type":"ContainerStarted","Data":"d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2"} Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.710272 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.713966 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.732280 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.768107 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-x7r2c"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880266 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880407 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880775 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-tr9h8\" (UniqueName: \"kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880836 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880885 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.880928 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0\") pod \"1709bb4a-ccf5-471d-9019-5bd4691071ad\" (UID: \"1709bb4a-ccf5-471d-9019-5bd4691071ad\") " Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.915250 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ccwmw"] Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.930384 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8" (OuterVolumeSpecName: "kube-api-access-tr9h8") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "kube-api-access-tr9h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:04 crc kubenswrapper[4801]: I1124 21:28:04.994995 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9h8\" (UniqueName: \"kubernetes.io/projected/1709bb4a-ccf5-471d-9019-5bd4691071ad-kube-api-access-tr9h8\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.006056 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m42cd"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.241797 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config" (OuterVolumeSpecName: "config") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.248298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.249322 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.307264 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.334436 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.334498 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.334518 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.334530 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.382999 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1709bb4a-ccf5-471d-9019-5bd4691071ad" (UID: "1709bb4a-ccf5-471d-9019-5bd4691071ad"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.437897 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1709bb4a-ccf5-471d-9019-5bd4691071ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.572755 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.593798 4801 generic.go:334] "Generic (PLEG): container finished" podID="a89c80c6-04e7-4fc1-aa02-556ad647c322" containerID="b66add0f859007a48fdee1b378e9d025274be9ae0692538f3983e417619082d9" exitCode=0 Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.593892 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf" event={"ID":"a89c80c6-04e7-4fc1-aa02-556ad647c322","Type":"ContainerDied","Data":"b66add0f859007a48fdee1b378e9d025274be9ae0692538f3983e417619082d9"} Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.614849 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhvd4" event={"ID":"59fafd61-3dc7-45b9-9285-ca2119d8123a","Type":"ContainerStarted","Data":"1d8d94a461742ebbc55f22e1f10cf2f69c821a5e0e3fe9446d05a4f18d8d494c"} Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.620405 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x7r2c" event={"ID":"45a6644f-aa22-41cb-bf2a-4930db050d45","Type":"ContainerStarted","Data":"068ebdcf2dc5c0c5ab8bf417e6a482e573ff174c98a7964c0b6a16b201a1fc86"} Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.628646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m42cd" event={"ID":"c5d93222-44c6-4113-92f2-8a320aefbd82","Type":"ContainerStarted","Data":"0ab8451cb7122cd2595704b32574b50cb72cb86d57312cb9854d096486105b61"} Nov 24 21:28:05 crc 
kubenswrapper[4801]: I1124 21:28:05.633807 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.642139 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" event={"ID":"1709bb4a-ccf5-471d-9019-5bd4691071ad","Type":"ContainerDied","Data":"259590078c7c318a445570442e9dbedc6fc6ebead3432a7aaa33b26ee71fefc3"} Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.642244 4801 scope.go:117] "RemoveContainer" containerID="09b495bc35c9249cd510cd94691a9516c99df64623bc87f9fd1a29ebec2e0651" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.642486 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5rc4w" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.663536 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ccwmw" event={"ID":"3f3b859c-0916-4b01-a41f-0b9fd4d8b204","Type":"ContainerStarted","Data":"c41b5ed3100ab3ffbe71e6b930dc453accc260fa29d3736dd8d09149551e931b"} Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.674139 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hhvd4" podStartSLOduration=3.674114331 podStartE2EDuration="3.674114331s" podCreationTimestamp="2025-11-24 21:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:05.652270003 +0000 UTC m=+1257.734856673" watchObservedRunningTime="2025-11-24 21:28:05.674114331 +0000 UTC m=+1257.756701001" Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.699330 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"] Nov 24 21:28:05 crc kubenswrapper[4801]: W1124 21:28:05.712105 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d5bd12_e735_4141_a9ee_8ecd83139445.slice/crio-deed8041ceff5eff52677fffca30e83c5687630f7692e0bcb6dd0a9f803ced50 WatchSource:0}: Error finding container deed8041ceff5eff52677fffca30e83c5687630f7692e0bcb6dd0a9f803ced50: Status 404 returned error can't find the container with id deed8041ceff5eff52677fffca30e83c5687630f7692e0bcb6dd0a9f803ced50 Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.715548 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kmchz"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.751392 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5rc4w"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.784582 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.803599 4801 scope.go:117] "RemoveContainer" containerID="dc7d60856291cec30637dbe084bd427de707dd856111c19e0067b9ff68c5738a" Nov 24 21:28:05 crc kubenswrapper[4801]: W1124 21:28:05.844800 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953c737e_024f_41ba_9544_d1238b75519c.slice/crio-79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b WatchSource:0}: Error finding container 79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b: Status 404 returned error can't find the container with id 79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.847622 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pzzvv"] Nov 24 21:28:05 crc kubenswrapper[4801]: W1124 21:28:05.880410 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a48374_ecd3_49fb_bddb_d9afa60a4ac6.slice/crio-7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab WatchSource:0}: Error finding container 7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab: Status 404 returned error can't find the container with id 7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab Nov 24 21:28:05 crc kubenswrapper[4801]: I1124 21:28:05.914480 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.286243 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.341887 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.371299 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.392206 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.479669 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.480343 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.480453 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.480498 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.480528 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 
21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.480745 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8dwq\" (UniqueName: \"kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq\") pod \"a89c80c6-04e7-4fc1-aa02-556ad647c322\" (UID: \"a89c80c6-04e7-4fc1-aa02-556ad647c322\") " Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.495785 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq" (OuterVolumeSpecName: "kube-api-access-z8dwq") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "kube-api-access-z8dwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.537474 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.552906 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.583489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.589063 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.589116 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8dwq\" (UniqueName: \"kubernetes.io/projected/a89c80c6-04e7-4fc1-aa02-556ad647c322-kube-api-access-z8dwq\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.589131 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.589143 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.640899 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config" (OuterVolumeSpecName: "config") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.652330 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a89c80c6-04e7-4fc1-aa02-556ad647c322" (UID: "a89c80c6-04e7-4fc1-aa02-556ad647c322"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.698350 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" path="/var/lib/kubelet/pods/1709bb4a-ccf5-471d-9019-5bd4691071ad/volumes" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.699228 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.699271 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89c80c6-04e7-4fc1-aa02-556ad647c322-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.718911 4801 generic.go:334] "Generic (PLEG): container finished" podID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerID="f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a" exitCode=0 Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.719112 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" event={"ID":"08d5bd12-e735-4141-a9ee-8ecd83139445","Type":"ContainerDied","Data":"f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.719147 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" 
event={"ID":"08d5bd12-e735-4141-a9ee-8ecd83139445","Type":"ContainerStarted","Data":"deed8041ceff5eff52677fffca30e83c5687630f7692e0bcb6dd0a9f803ced50"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.722671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pzzvv" event={"ID":"94a48374-ecd3-49fb-bddb-d9afa60a4ac6","Type":"ContainerStarted","Data":"b4405df26d211ac6160aed2435615cea115a4c9de09a802e40831646ed7aca74"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.722734 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pzzvv" event={"ID":"94a48374-ecd3-49fb-bddb-d9afa60a4ac6","Type":"ContainerStarted","Data":"7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.731691 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerStarted","Data":"a5e64dab1ede223e3294600c14c59b627839ea475ac9fa41b63c181a03d351b6"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.738695 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmchz" event={"ID":"953c737e-024f-41ba-9544-d1238b75519c","Type":"ContainerStarted","Data":"79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.807868 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pzzvv" podStartSLOduration=3.807831477 podStartE2EDuration="3.807831477s" podCreationTimestamp="2025-11-24 21:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:06.752031527 +0000 UTC m=+1258.834618207" watchObservedRunningTime="2025-11-24 21:28:06.807831477 +0000 UTC m=+1258.890418147" Nov 24 21:28:06 crc kubenswrapper[4801]: 
I1124 21:28:06.830098 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.830113 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nm8xf" event={"ID":"a89c80c6-04e7-4fc1-aa02-556ad647c322","Type":"ContainerDied","Data":"a4b6162e9f6b6de6411f0b47530b50e12514f0870a4907a237122639c35cd3e7"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.830207 4801 scope.go:117] "RemoveContainer" containerID="b66add0f859007a48fdee1b378e9d025274be9ae0692538f3983e417619082d9" Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.838575 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerStarted","Data":"023f6aba59e2c04b0b6a2bd72ce5617f3821569a86b4e5b5d1d20f4544fa96c1"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.843062 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerStarted","Data":"359485d66e82f6c71d8d6aeb87e1a51f06680cc1e9bf51372ee8e430060a2b32"} Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.963978 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"] Nov 24 21:28:06 crc kubenswrapper[4801]: I1124 21:28:06.983810 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nm8xf"] Nov 24 21:28:07 crc kubenswrapper[4801]: I1124 21:28:07.896707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerStarted","Data":"2365e42fc0d17ba54bd723263a8888d4ee065f555493c212b35fa4ddb0411234"} Nov 24 21:28:07 crc kubenswrapper[4801]: I1124 21:28:07.910392 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerStarted","Data":"a91f8ec9a8799db727bccd090293662bea71c4587d9b35209ee9e372fcb5c827"} Nov 24 21:28:08 crc kubenswrapper[4801]: I1124 21:28:08.768732 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89c80c6-04e7-4fc1-aa02-556ad647c322" path="/var/lib/kubelet/pods/a89c80c6-04e7-4fc1-aa02-556ad647c322/volumes" Nov 24 21:28:08 crc kubenswrapper[4801]: I1124 21:28:08.936763 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" event={"ID":"08d5bd12-e735-4141-a9ee-8ecd83139445","Type":"ContainerStarted","Data":"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"} Nov 24 21:28:08 crc kubenswrapper[4801]: I1124 21:28:08.937869 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:08 crc kubenswrapper[4801]: I1124 21:28:08.967678 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" podStartSLOduration=5.9676558 podStartE2EDuration="5.9676558s" podCreationTimestamp="2025-11-24 21:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:08.960101303 +0000 UTC m=+1261.042687993" watchObservedRunningTime="2025-11-24 21:28:08.9676558 +0000 UTC m=+1261.050242470" Nov 24 21:28:09 crc kubenswrapper[4801]: I1124 21:28:09.211760 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:28:09 crc kubenswrapper[4801]: I1124 21:28:09.228354 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.003067 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-external-api-0" event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerStarted","Data":"aceb79035f306d45ad68c1fe2c075f572a49f435b4c3eee40cdee4b189c9786b"} Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.003464 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-log" containerID="cri-o://a91f8ec9a8799db727bccd090293662bea71c4587d9b35209ee9e372fcb5c827" gracePeriod=30 Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.003762 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-httpd" containerID="cri-o://aceb79035f306d45ad68c1fe2c075f572a49f435b4c3eee40cdee4b189c9786b" gracePeriod=30 Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.010222 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-log" containerID="cri-o://2365e42fc0d17ba54bd723263a8888d4ee065f555493c212b35fa4ddb0411234" gracePeriod=30 Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.010293 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerStarted","Data":"505fa8a9a75d4b0fa17d830ebdb902ff9c9fc93c1dcf8f39336ee8cebd3622b3"} Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.011425 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-httpd" containerID="cri-o://505fa8a9a75d4b0fa17d830ebdb902ff9c9fc93c1dcf8f39336ee8cebd3622b3" gracePeriod=30 Nov 24 21:28:10 crc 
kubenswrapper[4801]: I1124 21:28:10.023621 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.073515 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.073493427 podStartE2EDuration="8.073493427s" podCreationTimestamp="2025-11-24 21:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:10.066143065 +0000 UTC m=+1262.148729745" watchObservedRunningTime="2025-11-24 21:28:10.073493427 +0000 UTC m=+1262.156080097" Nov 24 21:28:10 crc kubenswrapper[4801]: I1124 21:28:10.113676 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.113658487 podStartE2EDuration="8.113658487s" podCreationTimestamp="2025-11-24 21:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:10.111713318 +0000 UTC m=+1262.194299988" watchObservedRunningTime="2025-11-24 21:28:10.113658487 +0000 UTC m=+1262.196245157" Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.027043 4801 generic.go:334] "Generic (PLEG): container finished" podID="354befed-1924-4c21-bfd8-ba1aa7423789" containerID="aceb79035f306d45ad68c1fe2c075f572a49f435b4c3eee40cdee4b189c9786b" exitCode=0 Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.027551 4801 generic.go:334] "Generic (PLEG): container finished" podID="354befed-1924-4c21-bfd8-ba1aa7423789" containerID="a91f8ec9a8799db727bccd090293662bea71c4587d9b35209ee9e372fcb5c827" exitCode=143 Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.027136 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerDied","Data":"aceb79035f306d45ad68c1fe2c075f572a49f435b4c3eee40cdee4b189c9786b"} Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.027699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerDied","Data":"a91f8ec9a8799db727bccd090293662bea71c4587d9b35209ee9e372fcb5c827"} Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.035353 4801 generic.go:334] "Generic (PLEG): container finished" podID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerID="505fa8a9a75d4b0fa17d830ebdb902ff9c9fc93c1dcf8f39336ee8cebd3622b3" exitCode=0 Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.035404 4801 generic.go:334] "Generic (PLEG): container finished" podID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerID="2365e42fc0d17ba54bd723263a8888d4ee065f555493c212b35fa4ddb0411234" exitCode=143 Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.035396 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerDied","Data":"505fa8a9a75d4b0fa17d830ebdb902ff9c9fc93c1dcf8f39336ee8cebd3622b3"} Nov 24 21:28:11 crc kubenswrapper[4801]: I1124 21:28:11.035465 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerDied","Data":"2365e42fc0d17ba54bd723263a8888d4ee065f555493c212b35fa4ddb0411234"} Nov 24 21:28:12 crc kubenswrapper[4801]: I1124 21:28:12.053782 4801 generic.go:334] "Generic (PLEG): container finished" podID="59fafd61-3dc7-45b9-9285-ca2119d8123a" containerID="1d8d94a461742ebbc55f22e1f10cf2f69c821a5e0e3fe9446d05a4f18d8d494c" exitCode=0 Nov 24 21:28:12 crc kubenswrapper[4801]: I1124 21:28:12.053837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-hhvd4" event={"ID":"59fafd61-3dc7-45b9-9285-ca2119d8123a","Type":"ContainerDied","Data":"1d8d94a461742ebbc55f22e1f10cf2f69c821a5e0e3fe9446d05a4f18d8d494c"} Nov 24 21:28:13 crc kubenswrapper[4801]: I1124 21:28:13.917130 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" Nov 24 21:28:14 crc kubenswrapper[4801]: I1124 21:28:14.036416 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:28:14 crc kubenswrapper[4801]: I1124 21:28:14.036732 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" containerID="cri-o://24940f0b159ee1aac7122007df958bba9eea49888ac111f7b6e248f6964eac87" gracePeriod=10 Nov 24 21:28:15 crc kubenswrapper[4801]: I1124 21:28:15.102934 4801 generic.go:334] "Generic (PLEG): container finished" podID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerID="24940f0b159ee1aac7122007df958bba9eea49888ac111f7b6e248f6964eac87" exitCode=0 Nov 24 21:28:15 crc kubenswrapper[4801]: I1124 21:28:15.103025 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" event={"ID":"47dfc1ea-2614-472f-8374-4d4955a197b1","Type":"ContainerDied","Data":"24940f0b159ee1aac7122007df958bba9eea49888ac111f7b6e248f6964eac87"} Nov 24 21:28:16 crc kubenswrapper[4801]: I1124 21:28:16.143384 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Nov 24 21:28:21 crc kubenswrapper[4801]: I1124 21:28:21.142678 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Nov 24 21:28:23 crc kubenswrapper[4801]: E1124 21:28:23.277638 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 21:28:23 crc kubenswrapper[4801]: E1124 21:28:23.278608 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h77h558h6bh54ch55dh9fh54dhb4h586h65ch57chd9h547h5cfh665h56h8dhdhfbh55h5f4hfh6dh599h687h66h5c6h66ch95h594h67dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvdhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:Pro
beHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9916e08d-472d-47e4-a0ea-d8b67bb8faee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.443140 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hhvd4" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.529604 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.529814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.530023 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.530090 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tspfn\" (UniqueName: \"kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.530139 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.530216 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys\") pod \"59fafd61-3dc7-45b9-9285-ca2119d8123a\" (UID: \"59fafd61-3dc7-45b9-9285-ca2119d8123a\") " Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.539617 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.544935 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts" (OuterVolumeSpecName: "scripts") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.553283 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn" (OuterVolumeSpecName: "kube-api-access-tspfn") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "kube-api-access-tspfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.571091 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.575147 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data" (OuterVolumeSpecName: "config-data") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.592834 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59fafd61-3dc7-45b9-9285-ca2119d8123a" (UID: "59fafd61-3dc7-45b9-9285-ca2119d8123a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635707 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tspfn\" (UniqueName: \"kubernetes.io/projected/59fafd61-3dc7-45b9-9285-ca2119d8123a-kube-api-access-tspfn\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635783 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635800 4801 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635818 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 
21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635833 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:23 crc kubenswrapper[4801]: I1124 21:28:23.635845 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59fafd61-3dc7-45b9-9285-ca2119d8123a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:23 crc kubenswrapper[4801]: E1124 21:28:23.925143 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 24 21:28:23 crc kubenswrapper[4801]: E1124 21:28:23.925920 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr7sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-x7r2c_openstack(45a6644f-aa22-41cb-bf2a-4930db050d45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
24 21:28:23 crc kubenswrapper[4801]: E1124 21:28:23.927503 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-x7r2c" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.227521 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhvd4" event={"ID":"59fafd61-3dc7-45b9-9285-ca2119d8123a","Type":"ContainerDied","Data":"d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2"} Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.227564 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhvd4" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.227598 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d916b7dbed69648240a56c3d27c5af19912d844dfb1b652c8685b7928b3bc6f2" Nov 24 21:28:24 crc kubenswrapper[4801]: E1124 21:28:24.229826 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-x7r2c" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.320573 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.320651 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.567172 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hhvd4"] Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.577609 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hhvd4"] Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.688441 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fafd61-3dc7-45b9-9285-ca2119d8123a" path="/var/lib/kubelet/pods/59fafd61-3dc7-45b9-9285-ca2119d8123a/volumes" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.689814 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q8zwc"] Nov 24 21:28:24 crc kubenswrapper[4801]: E1124 21:28:24.690791 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fafd61-3dc7-45b9-9285-ca2119d8123a" containerName="keystone-bootstrap" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.690815 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fafd61-3dc7-45b9-9285-ca2119d8123a" containerName="keystone-bootstrap" Nov 24 21:28:24 crc kubenswrapper[4801]: E1124 21:28:24.690881 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89c80c6-04e7-4fc1-aa02-556ad647c322" containerName="init" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.690895 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89c80c6-04e7-4fc1-aa02-556ad647c322" containerName="init" Nov 24 21:28:24 crc kubenswrapper[4801]: E1124 21:28:24.690924 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="init" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.690950 4801 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="init" Nov 24 21:28:24 crc kubenswrapper[4801]: E1124 21:28:24.690963 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="dnsmasq-dns" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.690971 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="dnsmasq-dns" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.691352 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89c80c6-04e7-4fc1-aa02-556ad647c322" containerName="init" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.691383 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1709bb4a-ccf5-471d-9019-5bd4691071ad" containerName="dnsmasq-dns" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.691410 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fafd61-3dc7-45b9-9285-ca2119d8123a" containerName="keystone-bootstrap" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.692555 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q8zwc"] Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.692723 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.699000 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.699000 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.699266 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.699398 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.699424 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s9sx4" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.762589 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.763197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.763282 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts\") pod \"keystone-bootstrap-q8zwc\" (UID: 
\"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.765218 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.765254 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.765311 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hf5\" (UniqueName: \"kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.867828 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.867919 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " 
pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.868007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hf5\" (UniqueName: \"kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.868106 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.868142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.868184 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.878248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.878474 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.879919 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.883908 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.884565 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:24 crc kubenswrapper[4801]: I1124 21:28:24.899527 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hf5\" (UniqueName: \"kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5\") pod \"keystone-bootstrap-q8zwc\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:25 crc kubenswrapper[4801]: I1124 21:28:25.021698 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:31 crc kubenswrapper[4801]: I1124 21:28:31.143144 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: i/o timeout" Nov 24 21:28:31 crc kubenswrapper[4801]: I1124 21:28:31.144394 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:28:31 crc kubenswrapper[4801]: I1124 21:28:31.326424 4801 generic.go:334] "Generic (PLEG): container finished" podID="94a48374-ecd3-49fb-bddb-d9afa60a4ac6" containerID="b4405df26d211ac6160aed2435615cea115a4c9de09a802e40831646ed7aca74" exitCode=0 Nov 24 21:28:31 crc kubenswrapper[4801]: I1124 21:28:31.326493 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pzzvv" event={"ID":"94a48374-ecd3-49fb-bddb-d9afa60a4ac6","Type":"ContainerDied","Data":"b4405df26d211ac6160aed2435615cea115a4c9de09a802e40831646ed7aca74"} Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.062851 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.063029 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mznbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kmchz_openstack(953c737e-024f-41ba-9544-d1238b75519c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.064236 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kmchz" 
podUID="953c737e-024f-41ba-9544-d1238b75519c" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.234270 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.240882 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.246049 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.366784 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d85bf6a-fcc9-455a-be1b-84ea7be73590","Type":"ContainerDied","Data":"a5e64dab1ede223e3294600c14c59b627839ea475ac9fa41b63c181a03d351b6"} Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.366861 4801 scope.go:117] "RemoveContainer" containerID="505fa8a9a75d4b0fa17d830ebdb902ff9c9fc93c1dcf8f39336ee8cebd3622b3" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.367091 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.380682 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" event={"ID":"47dfc1ea-2614-472f-8374-4d4955a197b1","Type":"ContainerDied","Data":"b6e6b33da0eb9ff695566af44d761306a7a5c05f63f0f4c672a2f8febf18b111"} Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.380724 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.387986 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"354befed-1924-4c21-bfd8-ba1aa7423789","Type":"ContainerDied","Data":"023f6aba59e2c04b0b6a2bd72ce5617f3821569a86b4e5b5d1d20f4544fa96c1"} Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.388120 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.389238 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-kmchz" podUID="953c737e-024f-41ba-9544-d1238b75519c" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413686 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413730 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413768 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: 
\"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413790 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413861 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndbfn\" (UniqueName: \"kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413914 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413943 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.413971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414031 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414088 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414122 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414142 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414197 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414214 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414282 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414314 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzrcf\" (UniqueName: \"kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf\") pod \"354befed-1924-4c21-bfd8-ba1aa7423789\" (UID: \"354befed-1924-4c21-bfd8-ba1aa7423789\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414347 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gz5c\" (UniqueName: \"kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414404 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs\") pod \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\" (UID: \"1d85bf6a-fcc9-455a-be1b-84ea7be73590\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414444 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414495 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 
21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414531 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.414578 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config\") pod \"47dfc1ea-2614-472f-8374-4d4955a197b1\" (UID: \"47dfc1ea-2614-472f-8374-4d4955a197b1\") " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.415698 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs" (OuterVolumeSpecName: "logs") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.419482 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.421770 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c" (OuterVolumeSpecName: "kube-api-access-7gz5c") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "kube-api-access-7gz5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.421831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs" (OuterVolumeSpecName: "logs") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.422210 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.423915 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.424245 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.426731 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf" (OuterVolumeSpecName: "kube-api-access-wzrcf") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "kube-api-access-wzrcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.429681 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn" (OuterVolumeSpecName: "kube-api-access-ndbfn") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "kube-api-access-ndbfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.431037 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts" (OuterVolumeSpecName: "scripts") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.457727 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts" (OuterVolumeSpecName: "scripts") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.470772 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517753 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517796 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354befed-1924-4c21-bfd8-ba1aa7423789-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517826 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517843 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517856 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517870 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzrcf\" (UniqueName: 
\"kubernetes.io/projected/354befed-1924-4c21-bfd8-ba1aa7423789-kube-api-access-wzrcf\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517883 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gz5c\" (UniqueName: \"kubernetes.io/projected/47dfc1ea-2614-472f-8374-4d4955a197b1-kube-api-access-7gz5c\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517897 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d85bf6a-fcc9-455a-be1b-84ea7be73590-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517910 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517922 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517935 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndbfn\" (UniqueName: \"kubernetes.io/projected/1d85bf6a-fcc9-455a-be1b-84ea7be73590-kube-api-access-ndbfn\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.517947 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.537342 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.552651 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.552785 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config" (OuterVolumeSpecName: "config") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.559111 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.567070 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.570735 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.570789 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data" (OuterVolumeSpecName: "config-data") pod "354befed-1924-4c21-bfd8-ba1aa7423789" (UID: "354befed-1924-4c21-bfd8-ba1aa7423789"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.583030 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.584680 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.589870 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.590402 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data" (OuterVolumeSpecName: "config-data") pod "1d85bf6a-fcc9-455a-be1b-84ea7be73590" (UID: "1d85bf6a-fcc9-455a-be1b-84ea7be73590"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.594175 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47dfc1ea-2614-472f-8374-4d4955a197b1" (UID: "47dfc1ea-2614-472f-8374-4d4955a197b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621387 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621444 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621461 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621474 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621520 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621537 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621554 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47dfc1ea-2614-472f-8374-4d4955a197b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621568 4801 
reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/354befed-1924-4c21-bfd8-ba1aa7423789-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621579 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621590 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85bf6a-fcc9-455a-be1b-84ea7be73590-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621602 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.621614 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.739382 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.757031 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.786567 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787227 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787295 4801 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787326 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787335 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787377 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787385 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787420 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787428 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787440 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="init" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787447 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="init" Nov 24 21:28:32 crc kubenswrapper[4801]: E1124 21:28:32.787458 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787464 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787692 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787715 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-log" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787724 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787737 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.787749 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" containerName="glance-httpd" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.789235 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.792713 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-68kgz" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.792970 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.793156 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.795719 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.805175 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.817298 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-k8pj7"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.840980 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841140 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklwj\" (UniqueName: \"kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841316 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841439 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841484 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841724 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.841942 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.851750 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.862550 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.870910 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.879182 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.882192 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.885418 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.885576 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.888325 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944339 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944441 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944498 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kklwj\" (UniqueName: \"kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944606 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.944713 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.945000 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") device 
mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.945257 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.946103 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.949251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.951774 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.952650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 
21:28:32.962923 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.964583 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kklwj\" (UniqueName: \"kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:32 crc kubenswrapper[4801]: I1124 21:28:32.989413 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047281 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lz9\" (UniqueName: \"kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047359 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047445 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047464 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047484 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.047550 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 
21:28:33.047652 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.115157 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.149809 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150234 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150683 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150874 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.150935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.151072 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.151101 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lz9\" (UniqueName: \"kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " 
pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.151606 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.151967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.155477 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.156312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.157310 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.159522 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.171047 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lz9\" (UniqueName: \"kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.199590 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.499786 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:28:33 crc kubenswrapper[4801]: E1124 21:28:33.961305 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 21:28:33 crc kubenswrapper[4801]: E1124 21:28:33.961579 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPa
th:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7wjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ccwmw_openstack(3f3b859c-0916-4b01-a41f-0b9fd4d8b204): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:28:33 crc kubenswrapper[4801]: E1124 21:28:33.962910 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ccwmw" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" Nov 24 21:28:33 crc kubenswrapper[4801]: I1124 21:28:33.991892 4801 scope.go:117] "RemoveContainer" containerID="2365e42fc0d17ba54bd723263a8888d4ee065f555493c212b35fa4ddb0411234" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.136555 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.282322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle\") pod \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.282503 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config\") pod \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.282805 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfp6x\" (UniqueName: \"kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x\") pod \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\" (UID: \"94a48374-ecd3-49fb-bddb-d9afa60a4ac6\") " Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.297687 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x" (OuterVolumeSpecName: "kube-api-access-dfp6x") pod "94a48374-ecd3-49fb-bddb-d9afa60a4ac6" (UID: "94a48374-ecd3-49fb-bddb-d9afa60a4ac6"). InnerVolumeSpecName "kube-api-access-dfp6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.322184 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a48374-ecd3-49fb-bddb-d9afa60a4ac6" (UID: "94a48374-ecd3-49fb-bddb-d9afa60a4ac6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.324259 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config" (OuterVolumeSpecName: "config") pod "94a48374-ecd3-49fb-bddb-d9afa60a4ac6" (UID: "94a48374-ecd3-49fb-bddb-d9afa60a4ac6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.386437 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfp6x\" (UniqueName: \"kubernetes.io/projected/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-kube-api-access-dfp6x\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.386477 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.386487 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/94a48374-ecd3-49fb-bddb-d9afa60a4ac6-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.411324 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pzzvv" event={"ID":"94a48374-ecd3-49fb-bddb-d9afa60a4ac6","Type":"ContainerDied","Data":"7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab"} Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.411392 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af319d3847467777e996d26f8b2f6a5ea188ecd98d21aeaca0b4909b1d5d9ab" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.411405 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pzzvv" Nov 24 21:28:34 crc kubenswrapper[4801]: E1124 21:28:34.416202 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-ccwmw" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.695737 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d85bf6a-fcc9-455a-be1b-84ea7be73590" path="/var/lib/kubelet/pods/1d85bf6a-fcc9-455a-be1b-84ea7be73590/volumes" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.697149 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354befed-1924-4c21-bfd8-ba1aa7423789" path="/var/lib/kubelet/pods/354befed-1924-4c21-bfd8-ba1aa7423789/volumes" Nov 24 21:28:34 crc kubenswrapper[4801]: I1124 21:28:34.698710 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" path="/var/lib/kubelet/pods/47dfc1ea-2614-472f-8374-4d4955a197b1/volumes" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.417472 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:28:35 crc kubenswrapper[4801]: E1124 21:28:35.418331 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a48374-ecd3-49fb-bddb-d9afa60a4ac6" containerName="neutron-db-sync" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.418351 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a48374-ecd3-49fb-bddb-d9afa60a4ac6" containerName="neutron-db-sync" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.420928 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a48374-ecd3-49fb-bddb-d9afa60a4ac6" containerName="neutron-db-sync" Nov 24 21:28:35 crc 
kubenswrapper[4801]: I1124 21:28:35.422311 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.460922 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.499023 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.501558 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.506896 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.507216 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sjm6h" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.507527 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.507689 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.511896 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524200 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524338 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524406 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pr6\" (UniqueName: \"kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524561 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.524629 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: 
I1124 21:28:35.627440 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.627564 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.627604 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.627630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6pr6\" (UniqueName: \"kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.627655 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.627847 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628073 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628234 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628426 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628497 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsd49\" (UniqueName: \"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628611 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628621 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628780 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.628884 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.629253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.629656 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.648866 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6pr6\" (UniqueName: \"kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6\") pod \"dnsmasq-dns-55f844cf75-dpxj5\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.731448 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.731541 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.731583 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsd49\" (UniqueName: \"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.731619 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.731714 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.740626 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.744153 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.744435 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.745489 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " 
pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.750598 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsd49\" (UniqueName: \"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49\") pod \"neutron-6996d87ddd-ph957\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.771631 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:35 crc kubenswrapper[4801]: I1124 21:28:35.842863 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.031809 4801 scope.go:117] "RemoveContainer" containerID="24940f0b159ee1aac7122007df958bba9eea49888ac111f7b6e248f6964eac87" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.145248 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-k8pj7" podUID="47dfc1ea-2614-472f-8374-4d4955a197b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: i/o timeout" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.266502 4801 scope.go:117] "RemoveContainer" containerID="87808e295194b592698b8d2de3982bd9d3e6a85778654da727f6264302e64bd5" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.404046 4801 scope.go:117] "RemoveContainer" containerID="aceb79035f306d45ad68c1fe2c075f572a49f435b4c3eee40cdee4b189c9786b" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.479084 4801 scope.go:117] "RemoveContainer" containerID="a91f8ec9a8799db727bccd090293662bea71c4587d9b35209ee9e372fcb5c827" Nov 24 21:28:36 crc kubenswrapper[4801]: I1124 21:28:36.998332 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:28:37 
crc kubenswrapper[4801]: I1124 21:28:37.012909 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q8zwc"] Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.044699 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.153041 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.435132 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.566147 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerStarted","Data":"0bbc7f6088f30626c5335f85ef7b623f3f95228024d0fabad0808acbb83f5a09"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.570125 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8zwc" event={"ID":"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5","Type":"ContainerStarted","Data":"60cbb024dccdf49d1b0f29f479ba4d1340fcb5e25d1525166462eb04da510d06"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.570158 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8zwc" event={"ID":"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5","Type":"ContainerStarted","Data":"ff5fed57e88caf2d6b63700fb0f199da1bf117b4f4028f6f0bb3d21c0e1856b7"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.574300 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m42cd" event={"ID":"c5d93222-44c6-4113-92f2-8a320aefbd82","Type":"ContainerStarted","Data":"60128802e180a48f9e9316805a6bd0957198ae71639de8e371a835d8cadcb352"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.585033 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" event={"ID":"310fde06-ed0f-4b6d-b928-03b14918aef7","Type":"ContainerStarted","Data":"8f5303990397a22bc6e0e66284d16f9854b9fe2973bb7d4ae0e2cd5fb8d98d45"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.588301 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q8zwc" podStartSLOduration=13.588287459 podStartE2EDuration="13.588287459s" podCreationTimestamp="2025-11-24 21:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:37.584336419 +0000 UTC m=+1289.666923089" watchObservedRunningTime="2025-11-24 21:28:37.588287459 +0000 UTC m=+1289.670874119" Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.594506 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerStarted","Data":"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.600581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerStarted","Data":"c9fe6f02f3f2f2e1d39bd5aa2e0919616526929c0370d621a700a688acc33909"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.616751 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m42cd" podStartSLOduration=7.691703626 podStartE2EDuration="34.616719594s" podCreationTimestamp="2025-11-24 21:28:03 +0000 UTC" firstStartedPulling="2025-11-24 21:28:05.145717346 +0000 UTC m=+1257.228304006" lastFinishedPulling="2025-11-24 21:28:32.070733304 +0000 UTC m=+1284.153319974" observedRunningTime="2025-11-24 21:28:37.603669861 +0000 UTC m=+1289.686256531" watchObservedRunningTime="2025-11-24 21:28:37.616719594 +0000 UTC 
m=+1289.699306264" Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.645710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerStarted","Data":"5fee1ec9a9ee7cede16a7f56c7854b4e9debe40a2ce9e5e0186c71a1d0599e88"} Nov 24 21:28:37 crc kubenswrapper[4801]: I1124 21:28:37.674877 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x7r2c" event={"ID":"45a6644f-aa22-41cb-bf2a-4930db050d45","Type":"ContainerStarted","Data":"b2f03bb73d4dfcb3ce5f4873cee08048747d7e9938e2a2f9062640084fb41d71"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.707074 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-x7r2c" podStartSLOduration=5.084013637 podStartE2EDuration="36.707054075s" podCreationTimestamp="2025-11-24 21:28:02 +0000 UTC" firstStartedPulling="2025-11-24 21:28:04.781457965 +0000 UTC m=+1256.864044635" lastFinishedPulling="2025-11-24 21:28:36.404498403 +0000 UTC m=+1288.487085073" observedRunningTime="2025-11-24 21:28:37.705632803 +0000 UTC m=+1289.788219473" watchObservedRunningTime="2025-11-24 21:28:38.707054075 +0000 UTC m=+1290.789640735" Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.714925 4801 generic.go:334] "Generic (PLEG): container finished" podID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerID="dc6db00e6050a22412b5c29514bd2ad55ad24c3e539ff6dc8f12237872fe4f0c" exitCode=0 Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.714986 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" event={"ID":"310fde06-ed0f-4b6d-b928-03b14918aef7","Type":"ContainerDied","Data":"dc6db00e6050a22412b5c29514bd2ad55ad24c3e539ff6dc8f12237872fe4f0c"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.728907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerStarted","Data":"1e34f51e8f2b9e9b7d4073db7fee2c72004217cb791ecf489730c3da31b225a2"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.758221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerStarted","Data":"b90475abb82f1fd03eb3817683c40ba0f834979270b39d32fc43edec03c7d176"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.770143 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerStarted","Data":"c1290c152ff34be279b8ba314f8f4364edf48a522b440d9e8c251d1f6d2bd460"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.770193 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerStarted","Data":"9929003925df02c04b10bb5ba2a7aabe654756684512ad20c8328d14bccc5544"} Nov 24 21:28:38 crc kubenswrapper[4801]: I1124 21:28:38.770217 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.014218 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6996d87ddd-ph957" podStartSLOduration=4.014193055 podStartE2EDuration="4.014193055s" podCreationTimestamp="2025-11-24 21:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:38.886428417 +0000 UTC m=+1290.969015087" watchObservedRunningTime="2025-11-24 21:28:39.014193055 +0000 UTC m=+1291.096779725" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.650243 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f8c5d485-f945g"] Nov 24 
21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.653785 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.657515 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.657689 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.669644 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f8c5d485-f945g"] Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.758439 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-ovndb-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.759825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-internal-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.759948 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-httpd-config\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.760157 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-config\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.760303 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg22j\" (UniqueName: \"kubernetes.io/projected/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-kube-api-access-cg22j\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.760447 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-combined-ca-bundle\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.760484 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-public-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.820505 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerStarted","Data":"ad2e57a16ec696a20855e2e6ea2cf1f6b90322c3a418fa33ea5d564570bcf1b0"} Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.828920 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" 
event={"ID":"310fde06-ed0f-4b6d-b928-03b14918aef7","Type":"ContainerStarted","Data":"b26004a0a5144a889fc27f7359107cfd53dad877f301cf1447af946b7fc04aef"} Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.829387 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.837988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerStarted","Data":"51897b962c56e28aed948da93494e0ae0aad5c8b6f44ae4e813d7caa6293e935"} Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.858542 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.858514746 podStartE2EDuration="7.858514746s" podCreationTimestamp="2025-11-24 21:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:39.848319648 +0000 UTC m=+1291.930906328" watchObservedRunningTime="2025-11-24 21:28:39.858514746 +0000 UTC m=+1291.941101416" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863228 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-ovndb-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863287 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-internal-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc 
kubenswrapper[4801]: I1124 21:28:39.863319 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-httpd-config\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863412 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-config\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863475 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg22j\" (UniqueName: \"kubernetes.io/projected/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-kube-api-access-cg22j\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-combined-ca-bundle\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.863550 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-public-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.872941 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-combined-ca-bundle\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.874463 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-ovndb-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.875096 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-httpd-config\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.875966 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-public-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.877759 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-internal-tls-certs\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.878942 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-config\") pod \"neutron-5f8c5d485-f945g\" (UID: 
\"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.893985 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg22j\" (UniqueName: \"kubernetes.io/projected/1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43-kube-api-access-cg22j\") pod \"neutron-5f8c5d485-f945g\" (UID: \"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43\") " pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.899051 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.899023855 podStartE2EDuration="7.899023855s" podCreationTimestamp="2025-11-24 21:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:39.890388575 +0000 UTC m=+1291.972975255" watchObservedRunningTime="2025-11-24 21:28:39.899023855 +0000 UTC m=+1291.981610515" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.927811 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" podStartSLOduration=4.927790042 podStartE2EDuration="4.927790042s" podCreationTimestamp="2025-11-24 21:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:39.924781451 +0000 UTC m=+1292.007368121" watchObservedRunningTime="2025-11-24 21:28:39.927790042 +0000 UTC m=+1292.010376712" Nov 24 21:28:39 crc kubenswrapper[4801]: I1124 21:28:39.980156 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:40 crc kubenswrapper[4801]: I1124 21:28:40.684524 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f8c5d485-f945g"] Nov 24 21:28:40 crc kubenswrapper[4801]: I1124 21:28:40.856300 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8c5d485-f945g" event={"ID":"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43","Type":"ContainerStarted","Data":"45478e8028fca0109199c6c90afa863bc11c5cd90aaf82fcf339d4252c41ceb6"} Nov 24 21:28:40 crc kubenswrapper[4801]: I1124 21:28:40.860211 4801 generic.go:334] "Generic (PLEG): container finished" podID="c5d93222-44c6-4113-92f2-8a320aefbd82" containerID="60128802e180a48f9e9316805a6bd0957198ae71639de8e371a835d8cadcb352" exitCode=0 Nov 24 21:28:40 crc kubenswrapper[4801]: I1124 21:28:40.860335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m42cd" event={"ID":"c5d93222-44c6-4113-92f2-8a320aefbd82","Type":"ContainerDied","Data":"60128802e180a48f9e9316805a6bd0957198ae71639de8e371a835d8cadcb352"} Nov 24 21:28:41 crc kubenswrapper[4801]: I1124 21:28:41.876212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8c5d485-f945g" event={"ID":"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43","Type":"ContainerStarted","Data":"d5e6c425eda076aeb7e27b854b9c5febdf28ec56036fbcd438671ad661e90d2f"} Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.115595 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.116081 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.170623 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:43 crc 
kubenswrapper[4801]: I1124 21:28:43.175285 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.500678 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.500742 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.556105 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.590779 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.906211 4801 generic.go:334] "Generic (PLEG): container finished" podID="6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" containerID="60cbb024dccdf49d1b0f29f479ba4d1340fcb5e25d1525166462eb04da510d06" exitCode=0 Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.906286 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8zwc" event={"ID":"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5","Type":"ContainerDied","Data":"60cbb024dccdf49d1b0f29f479ba4d1340fcb5e25d1525166462eb04da510d06"} Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.907647 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.907684 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.907699 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Nov 24 21:28:43 crc kubenswrapper[4801]: I1124 21:28:43.907711 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.644340 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.817596 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs\") pod \"c5d93222-44c6-4113-92f2-8a320aefbd82\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.817964 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs" (OuterVolumeSpecName: "logs") pod "c5d93222-44c6-4113-92f2-8a320aefbd82" (UID: "c5d93222-44c6-4113-92f2-8a320aefbd82"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.818051 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data\") pod \"c5d93222-44c6-4113-92f2-8a320aefbd82\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.818133 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts\") pod \"c5d93222-44c6-4113-92f2-8a320aefbd82\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.818233 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle\") pod \"c5d93222-44c6-4113-92f2-8a320aefbd82\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.818274 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5lkp\" (UniqueName: \"kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp\") pod \"c5d93222-44c6-4113-92f2-8a320aefbd82\" (UID: \"c5d93222-44c6-4113-92f2-8a320aefbd82\") " Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.819027 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d93222-44c6-4113-92f2-8a320aefbd82-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.824522 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp" (OuterVolumeSpecName: "kube-api-access-h5lkp") pod 
"c5d93222-44c6-4113-92f2-8a320aefbd82" (UID: "c5d93222-44c6-4113-92f2-8a320aefbd82"). InnerVolumeSpecName "kube-api-access-h5lkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.831713 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts" (OuterVolumeSpecName: "scripts") pod "c5d93222-44c6-4113-92f2-8a320aefbd82" (UID: "c5d93222-44c6-4113-92f2-8a320aefbd82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.856221 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data" (OuterVolumeSpecName: "config-data") pod "c5d93222-44c6-4113-92f2-8a320aefbd82" (UID: "c5d93222-44c6-4113-92f2-8a320aefbd82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.860327 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5d93222-44c6-4113-92f2-8a320aefbd82" (UID: "c5d93222-44c6-4113-92f2-8a320aefbd82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.921191 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m42cd" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.921255 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m42cd" event={"ID":"c5d93222-44c6-4113-92f2-8a320aefbd82","Type":"ContainerDied","Data":"0ab8451cb7122cd2595704b32574b50cb72cb86d57312cb9854d096486105b61"} Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.921324 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8451cb7122cd2595704b32574b50cb72cb86d57312cb9854d096486105b61" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.922834 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.923035 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5lkp\" (UniqueName: \"kubernetes.io/projected/c5d93222-44c6-4113-92f2-8a320aefbd82-kube-api-access-h5lkp\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.923104 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.923136 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d93222-44c6-4113-92f2-8a320aefbd82-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.929800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerStarted","Data":"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7"} Nov 24 21:28:44 crc kubenswrapper[4801]: 
I1124 21:28:44.932703 4801 generic.go:334] "Generic (PLEG): container finished" podID="45a6644f-aa22-41cb-bf2a-4930db050d45" containerID="b2f03bb73d4dfcb3ce5f4873cee08048747d7e9938e2a2f9062640084fb41d71" exitCode=0 Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.932803 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x7r2c" event={"ID":"45a6644f-aa22-41cb-bf2a-4930db050d45","Type":"ContainerDied","Data":"b2f03bb73d4dfcb3ce5f4873cee08048747d7e9938e2a2f9062640084fb41d71"} Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.949407 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8c5d485-f945g" event={"ID":"1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43","Type":"ContainerStarted","Data":"5d67c50b9139b9eba52bb656306e24be0a66c8d042654677f17a2e54bd6bd989"} Nov 24 21:28:44 crc kubenswrapper[4801]: I1124 21:28:44.999606 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f8c5d485-f945g" podStartSLOduration=5.999567869 podStartE2EDuration="5.999567869s" podCreationTimestamp="2025-11-24 21:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:44.985908608 +0000 UTC m=+1297.068495278" watchObservedRunningTime="2025-11-24 21:28:44.999567869 +0000 UTC m=+1297.082154539" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.351443 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.550413 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.550723 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.550916 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.550966 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.551026 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.551083 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55hf5\" (UniqueName: 
\"kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5\") pod \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\" (UID: \"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5\") " Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.558747 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.559871 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts" (OuterVolumeSpecName: "scripts") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.560044 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5" (OuterVolumeSpecName: "kube-api-access-55hf5") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "kube-api-access-55hf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.562451 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.590272 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.593871 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data" (OuterVolumeSpecName: "config-data") pod "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" (UID: "6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656548 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656589 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656600 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656611 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55hf5\" (UniqueName: \"kubernetes.io/projected/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-kube-api-access-55hf5\") on node \"crc\" DevicePath \"\"" Nov 24 
21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656625 4801 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.656633 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.775449 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66f87bb5dd-vfs99"] Nov 24 21:28:45 crc kubenswrapper[4801]: E1124 21:28:45.776650 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" containerName="keystone-bootstrap" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.776676 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" containerName="keystone-bootstrap" Nov 24 21:28:45 crc kubenswrapper[4801]: E1124 21:28:45.776719 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d93222-44c6-4113-92f2-8a320aefbd82" containerName="placement-db-sync" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.776728 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d93222-44c6-4113-92f2-8a320aefbd82" containerName="placement-db-sync" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.777002 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d93222-44c6-4113-92f2-8a320aefbd82" containerName="placement-db-sync" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.777031 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" containerName="keystone-bootstrap" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.778509 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.779285 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.789024 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.789194 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.789384 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.789465 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9lsvs" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.790435 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.821836 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f87bb5dd-vfs99"] Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.937055 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"] Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.941550 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="dnsmasq-dns" containerID="cri-o://f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a" gracePeriod=10 Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.982453 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q8zwc" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.982338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q8zwc" event={"ID":"6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5","Type":"ContainerDied","Data":"ff5fed57e88caf2d6b63700fb0f199da1bf117b4f4028f6f0bb3d21c0e1856b7"} Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.982544 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5fed57e88caf2d6b63700fb0f199da1bf117b4f4028f6f0bb3d21c0e1856b7" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.982605 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.983772 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-combined-ca-bundle\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.983907 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/142f97c2-64f1-455a-bd07-ed9d5f9ab466-logs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.984105 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-public-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc 
kubenswrapper[4801]: I1124 21:28:45.984151 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-config-data\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.984171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-internal-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.984604 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-scripts\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:45 crc kubenswrapper[4801]: I1124 21:28:45.984804 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8hb\" (UniqueName: \"kubernetes.io/projected/142f97c2-64f1-455a-bd07-ed9d5f9ab466-kube-api-access-ws8hb\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.088805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-scripts\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: 
I1124 21:28:46.088898 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8hb\" (UniqueName: \"kubernetes.io/projected/142f97c2-64f1-455a-bd07-ed9d5f9ab466-kube-api-access-ws8hb\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.088952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-combined-ca-bundle\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.088994 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/142f97c2-64f1-455a-bd07-ed9d5f9ab466-logs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.089051 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-public-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.089077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-config-data\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.089093 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-internal-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.093609 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c567b6958-cngc5"] Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.098161 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.110951 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c567b6958-cngc5"] Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.118416 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.118679 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-combined-ca-bundle\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.118974 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.119158 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.119267 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.119398 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:28:46 crc 
kubenswrapper[4801]: I1124 21:28:46.119570 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s9sx4" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.121960 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/142f97c2-64f1-455a-bd07-ed9d5f9ab466-logs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.122350 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-scripts\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.130392 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-internal-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.139586 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8hb\" (UniqueName: \"kubernetes.io/projected/142f97c2-64f1-455a-bd07-ed9d5f9ab466-kube-api-access-ws8hb\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.139719 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-public-tls-certs\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " 
pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.140468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142f97c2-64f1-455a-bd07-ed9d5f9ab466-config-data\") pod \"placement-66f87bb5dd-vfs99\" (UID: \"142f97c2-64f1-455a-bd07-ed9d5f9ab466\") " pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.194482 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-combined-ca-bundle\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.194603 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-public-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.194655 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-fernet-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.194748 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-scripts\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" 
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.195024 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-internal-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.195057 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-credential-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.195107 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-config-data\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.195133 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48tj\" (UniqueName: \"kubernetes.io/projected/a6a2894c-3c0d-442a-ab62-31748e315cbe-kube-api-access-n48tj\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:28:46 crc kubenswrapper[4801]: E1124 21:28:46.234331 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d5bd12_e735_4141_a9ee_8ecd83139445.slice/crio-conmon-f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ec51489_4b2b_4708_85ca_e1c6d9f2fcc5.slice/crio-ff5fed57e88caf2d6b63700fb0f199da1bf117b4f4028f6f0bb3d21c0e1856b7\": RecentStats: unable to find data in memory cache]"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.299179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-internal-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.299260 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-credential-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.299656 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-config-data\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.299889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48tj\" (UniqueName: \"kubernetes.io/projected/a6a2894c-3c0d-442a-ab62-31748e315cbe-kube-api-access-n48tj\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.300534 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-combined-ca-bundle\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.300594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-public-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.300660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-fernet-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.300726 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-scripts\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.305751 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-combined-ca-bundle\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.306116 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-credential-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.306321 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-public-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.306779 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-config-data\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.307264 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-internal-tls-certs\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.309345 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-fernet-keys\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.317415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a2894c-3c0d-442a-ab62-31748e315cbe-scripts\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.318583 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48tj\" (UniqueName: \"kubernetes.io/projected/a6a2894c-3c0d-442a-ab62-31748e315cbe-kube-api-access-n48tj\") pod \"keystone-6c567b6958-cngc5\" (UID: \"a6a2894c-3c0d-442a-ab62-31748e315cbe\") " pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.435974 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66f87bb5dd-vfs99"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.538354 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.825853 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.872714 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6"
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.952676 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.952830 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.952962 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data\") pod \"45a6644f-aa22-41cb-bf2a-4930db050d45\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953075 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953254 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle\") pod \"45a6644f-aa22-41cb-bf2a-4930db050d45\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953332 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b99r6\" (UniqueName: \"kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953435 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7sk\" (UniqueName: \"kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk\") pod \"45a6644f-aa22-41cb-bf2a-4930db050d45\" (UID: \"45a6644f-aa22-41cb-bf2a-4930db050d45\") "
Nov 24 21:28:46 crc kubenswrapper[4801]: I1124 21:28:46.953654 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb\") pod \"08d5bd12-e735-4141-a9ee-8ecd83139445\" (UID: \"08d5bd12-e735-4141-a9ee-8ecd83139445\") "
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.002015 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk" (OuterVolumeSpecName: "kube-api-access-mr7sk") pod "45a6644f-aa22-41cb-bf2a-4930db050d45" (UID: "45a6644f-aa22-41cb-bf2a-4930db050d45"). InnerVolumeSpecName "kube-api-access-mr7sk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.013327 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6" (OuterVolumeSpecName: "kube-api-access-b99r6") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "kube-api-access-b99r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.057072 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr7sk\" (UniqueName: \"kubernetes.io/projected/45a6644f-aa22-41cb-bf2a-4930db050d45-kube-api-access-mr7sk\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.057106 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b99r6\" (UniqueName: \"kubernetes.io/projected/08d5bd12-e735-4141-a9ee-8ecd83139445-kube-api-access-b99r6\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.059336 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-x7r2c"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.059347 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-x7r2c" event={"ID":"45a6644f-aa22-41cb-bf2a-4930db050d45","Type":"ContainerDied","Data":"068ebdcf2dc5c0c5ab8bf417e6a482e573ff174c98a7964c0b6a16b201a1fc86"}
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.059418 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068ebdcf2dc5c0c5ab8bf417e6a482e573ff174c98a7964c0b6a16b201a1fc86"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.072891 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45a6644f-aa22-41cb-bf2a-4930db050d45" (UID: "45a6644f-aa22-41cb-bf2a-4930db050d45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.073807 4801 generic.go:334] "Generic (PLEG): container finished" podID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerID="f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a" exitCode=0
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.073930 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.073985 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" event={"ID":"08d5bd12-e735-4141-a9ee-8ecd83139445","Type":"ContainerDied","Data":"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"}
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.074044 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jf5b6" event={"ID":"08d5bd12-e735-4141-a9ee-8ecd83139445","Type":"ContainerDied","Data":"deed8041ceff5eff52677fffca30e83c5687630f7692e0bcb6dd0a9f803ced50"}
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.074065 4801 scope.go:117] "RemoveContainer" containerID="f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.146406 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.153653 4801 scope.go:117] "RemoveContainer" containerID="f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.162047 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.162088 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.165272 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.174632 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config" (OuterVolumeSpecName: "config") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.193430 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.195817 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data" (OuterVolumeSpecName: "config-data") pod "45a6644f-aa22-41cb-bf2a-4930db050d45" (UID: "45a6644f-aa22-41cb-bf2a-4930db050d45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.213195 4801 scope.go:117] "RemoveContainer" containerID="f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"
Nov 24 21:28:47 crc kubenswrapper[4801]: E1124 21:28:47.215783 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a\": container with ID starting with f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a not found: ID does not exist" containerID="f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.215947 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a"} err="failed to get container status \"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a\": rpc error: code = NotFound desc = could not find container \"f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a\": container with ID starting with f50e877ad092cd0c2c4cf75a0683d7baaa447c89336fe5f67a67b9146594d76a not found: ID does not exist"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.216069 4801 scope.go:117] "RemoveContainer" containerID="f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a"
Nov 24 21:28:47 crc kubenswrapper[4801]: E1124 21:28:47.218780 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a\": container with ID starting with f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a not found: ID does not exist" containerID="f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.218937 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a"} err="failed to get container status \"f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a\": rpc error: code = NotFound desc = could not find container \"f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a\": container with ID starting with f8757cbab406509dd0307e22f9455b3c0ce1edcf023614443465d5938816409a not found: ID does not exist"
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.220038 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08d5bd12-e735-4141-a9ee-8ecd83139445" (UID: "08d5bd12-e735-4141-a9ee-8ecd83139445"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.268630 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.268669 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.268690 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.268701 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a6644f-aa22-41cb-bf2a-4930db050d45-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.268714 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08d5bd12-e735-4141-a9ee-8ecd83139445-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.444540 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f87bb5dd-vfs99"]
Nov 24 21:28:47 crc kubenswrapper[4801]: W1124 21:28:47.461335 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod142f97c2_64f1_455a_bd07_ed9d5f9ab466.slice/crio-1b3d08993a2ef9c494d6b1f8a9fedf8ccfe5e90c74a08fd0c314a1ac818b1d10 WatchSource:0}: Error finding container 1b3d08993a2ef9c494d6b1f8a9fedf8ccfe5e90c74a08fd0c314a1ac818b1d10: Status 404 returned error can't find the container with id 1b3d08993a2ef9c494d6b1f8a9fedf8ccfe5e90c74a08fd0c314a1ac818b1d10
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.606256 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c567b6958-cngc5"]
Nov 24 21:28:47 crc kubenswrapper[4801]: W1124 21:28:47.630983 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6a2894c_3c0d_442a_ab62_31748e315cbe.slice/crio-a4245859de34c5e78fdfe7bb740e95185485837ecf557e599458719519acf6c1 WatchSource:0}: Error finding container a4245859de34c5e78fdfe7bb740e95185485837ecf557e599458719519acf6c1: Status 404 returned error can't find the container with id a4245859de34c5e78fdfe7bb740e95185485837ecf557e599458719519acf6c1
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.718504 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"]
Nov 24 21:28:47 crc kubenswrapper[4801]: I1124 21:28:47.732334 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jf5b6"]
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.088693 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f87bb5dd-vfs99" event={"ID":"142f97c2-64f1-455a-bd07-ed9d5f9ab466","Type":"ContainerStarted","Data":"ace6a170c9f4347f65c0010548a96789286c84e63d95e2849e8e10fc85ef05e4"}
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.089282 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f87bb5dd-vfs99" event={"ID":"142f97c2-64f1-455a-bd07-ed9d5f9ab466","Type":"ContainerStarted","Data":"1b3d08993a2ef9c494d6b1f8a9fedf8ccfe5e90c74a08fd0c314a1ac818b1d10"}
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.094585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmchz" event={"ID":"953c737e-024f-41ba-9544-d1238b75519c","Type":"ContainerStarted","Data":"78c100024e34c28f88c08b3b79101e04813f52ddc0319bc3fedfd1e07eb73a30"}
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.097507 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c567b6958-cngc5" event={"ID":"a6a2894c-3c0d-442a-ab62-31748e315cbe","Type":"ContainerStarted","Data":"cd6946c11b0dd08982b76d0673e81b466f3ee6792ce5c821f1cb60d2d66e39d6"}
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.097537 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c567b6958-cngc5" event={"ID":"a6a2894c-3c0d-442a-ab62-31748e315cbe","Type":"ContainerStarted","Data":"a4245859de34c5e78fdfe7bb740e95185485837ecf557e599458719519acf6c1"}
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.098054 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6c567b6958-cngc5"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.119483 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kmchz" podStartSLOduration=3.448339551 podStartE2EDuration="45.119464369s" podCreationTimestamp="2025-11-24 21:28:03 +0000 UTC" firstStartedPulling="2025-11-24 21:28:05.895900751 +0000 UTC m=+1257.978487421" lastFinishedPulling="2025-11-24 21:28:47.567025559 +0000 UTC m=+1299.649612239" observedRunningTime="2025-11-24 21:28:48.114034195 +0000 UTC m=+1300.196620865" watchObservedRunningTime="2025-11-24 21:28:48.119464369 +0000 UTC m=+1300.202051049"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.154210 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6c567b6958-cngc5" podStartSLOduration=2.154180405 podStartE2EDuration="2.154180405s" podCreationTimestamp="2025-11-24 21:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:48.139532293 +0000 UTC m=+1300.222118963" watchObservedRunningTime="2025-11-24 21:28:48.154180405 +0000 UTC m=+1300.236767075"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.224585 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.227904 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.244987 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.251861 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:28:48 crc kubenswrapper[4801]: I1124 21:28:48.682690 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" path="/var/lib/kubelet/pods/08d5bd12-e735-4141-a9ee-8ecd83139445/volumes"
Nov 24 21:28:49 crc kubenswrapper[4801]: I1124 21:28:49.134839 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f87bb5dd-vfs99" event={"ID":"142f97c2-64f1-455a-bd07-ed9d5f9ab466","Type":"ContainerStarted","Data":"dadaae678d9429d9be85f532a9d49c571feccea642ce9beaef9337e7f3418306"}
Nov 24 21:28:49 crc kubenswrapper[4801]: I1124 21:28:49.160039 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66f87bb5dd-vfs99" podStartSLOduration=4.160018109 podStartE2EDuration="4.160018109s" podCreationTimestamp="2025-11-24 21:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:28:49.154695409 +0000 UTC m=+1301.237282079" watchObservedRunningTime="2025-11-24 21:28:49.160018109 +0000 UTC m=+1301.242604779"
Nov 24 21:28:50 crc kubenswrapper[4801]: I1124 21:28:50.149907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ccwmw" event={"ID":"3f3b859c-0916-4b01-a41f-0b9fd4d8b204","Type":"ContainerStarted","Data":"1e8b23c4d2a53f58360dc0d64cdb399e8cebe35e4cd2e2288bcc5e3042130974"}
Nov 24 21:28:50 crc kubenswrapper[4801]: I1124 21:28:50.150812 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f87bb5dd-vfs99"
Nov 24 21:28:50 crc kubenswrapper[4801]: I1124 21:28:50.150837 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f87bb5dd-vfs99"
Nov 24 21:28:50 crc kubenswrapper[4801]: I1124 21:28:50.175563 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ccwmw" podStartSLOduration=5.107671048 podStartE2EDuration="48.175539405s" podCreationTimestamp="2025-11-24 21:28:02 +0000 UTC" firstStartedPulling="2025-11-24 21:28:05.084549364 +0000 UTC m=+1257.167136034" lastFinishedPulling="2025-11-24 21:28:48.152417721 +0000 UTC m=+1300.235004391" observedRunningTime="2025-11-24 21:28:50.168931057 +0000 UTC m=+1302.251517727" watchObservedRunningTime="2025-11-24 21:28:50.175539405 +0000 UTC m=+1302.258126075"
Nov 24 21:28:53 crc kubenswrapper[4801]: I1124 21:28:53.197170 4801 generic.go:334] "Generic (PLEG): container finished" podID="953c737e-024f-41ba-9544-d1238b75519c" containerID="78c100024e34c28f88c08b3b79101e04813f52ddc0319bc3fedfd1e07eb73a30" exitCode=0
Nov 24 21:28:53 crc kubenswrapper[4801]: I1124 21:28:53.197316 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmchz" event={"ID":"953c737e-024f-41ba-9544-d1238b75519c","Type":"ContainerDied","Data":"78c100024e34c28f88c08b3b79101e04813f52ddc0319bc3fedfd1e07eb73a30"}
Nov 24 21:28:54 crc kubenswrapper[4801]: I1124 21:28:54.320496 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:28:54 crc kubenswrapper[4801]: I1124 21:28:54.320586 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:28:55 crc kubenswrapper[4801]: I1124 21:28:55.948218 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmchz"
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.036124 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mznbj\" (UniqueName: \"kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj\") pod \"953c737e-024f-41ba-9544-d1238b75519c\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") "
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.036314 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data\") pod \"953c737e-024f-41ba-9544-d1238b75519c\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") "
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.036765 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle\") pod \"953c737e-024f-41ba-9544-d1238b75519c\" (UID: \"953c737e-024f-41ba-9544-d1238b75519c\") "
Nov 24 21:28:56 crc kubenswrapper[4801]: E1124 21:28:56.040176 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee"
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.043480 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "953c737e-024f-41ba-9544-d1238b75519c" (UID: "953c737e-024f-41ba-9544-d1238b75519c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.045173 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj" (OuterVolumeSpecName: "kube-api-access-mznbj") pod "953c737e-024f-41ba-9544-d1238b75519c" (UID: "953c737e-024f-41ba-9544-d1238b75519c"). InnerVolumeSpecName "kube-api-access-mznbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.071627 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953c737e-024f-41ba-9544-d1238b75519c" (UID: "953c737e-024f-41ba-9544-d1238b75519c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.140813 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.140860 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mznbj\" (UniqueName: \"kubernetes.io/projected/953c737e-024f-41ba-9544-d1238b75519c-kube-api-access-mznbj\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.140873 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/953c737e-024f-41ba-9544-d1238b75519c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.253595 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kmchz" event={"ID":"953c737e-024f-41ba-9544-d1238b75519c","Type":"ContainerDied","Data":"79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b"}
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.253631 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kmchz"
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.253645 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cf500050a4b5bfe9bf453dd6f7d4ba65134f1976a17d63d9598b91473d003b"
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.256275 4801 generic.go:334] "Generic (PLEG): container finished" podID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" containerID="1e8b23c4d2a53f58360dc0d64cdb399e8cebe35e4cd2e2288bcc5e3042130974" exitCode=0
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.256423 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ccwmw" event={"ID":"3f3b859c-0916-4b01-a41f-0b9fd4d8b204","Type":"ContainerDied","Data":"1e8b23c4d2a53f58360dc0d64cdb399e8cebe35e4cd2e2288bcc5e3042130974"}
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.260838 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerStarted","Data":"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d"}
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.261100 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.261106 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="proxy-httpd" containerID="cri-o://35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" gracePeriod=30
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.261049 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="ceilometer-notification-agent" containerID="cri-o://e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" gracePeriod=30
Nov 24 21:28:56 crc kubenswrapper[4801]: I1124 21:28:56.261116 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="sg-core" containerID="cri-o://3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" gracePeriod=30
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.343713 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8664997b87-rmz64"]
Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.353115 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c737e-024f-41ba-9544-d1238b75519c" containerName="barbican-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.353172 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c737e-024f-41ba-9544-d1238b75519c" containerName="barbican-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.353199 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="dnsmasq-dns"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.353207 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="dnsmasq-dns"
Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.353235 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" containerName="heat-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.353242 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" containerName="heat-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.353282 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="init"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.353415 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="init"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.355914 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d5bd12-e735-4141-a9ee-8ecd83139445" containerName="dnsmasq-dns"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.355983 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c737e-024f-41ba-9544-d1238b75519c" containerName="barbican-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.356002 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" containerName="heat-db-sync"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.358456 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.402053 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8664997b87-rmz64"]
Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.402186 4801 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.406231 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.412875 4801 generic.go:334] "Generic (PLEG): container finished" podID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" exitCode=0 Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.412960 4801 generic.go:334] "Generic (PLEG): container finished" podID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" exitCode=2 Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.412969 4801 generic.go:334] "Generic (PLEG): container finished" podID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" exitCode=0 Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.413353 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerDied","Data":"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d"} Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.413417 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerDied","Data":"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7"} Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.413431 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerDied","Data":"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc"} Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.413440 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9916e08d-472d-47e4-a0ea-d8b67bb8faee","Type":"ContainerDied","Data":"359485d66e82f6c71d8d6aeb87e1a51f06680cc1e9bf51372ee8e430060a2b32"} Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.413459 4801 scope.go:117] "RemoveContainer" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.415596 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gndc7" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.426528 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.429452 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-666f9c9746-pfrcd"] Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.441045 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="sg-core" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441089 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="sg-core" Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.441104 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="ceilometer-notification-agent" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441114 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="ceilometer-notification-agent" Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.441138 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="proxy-httpd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441147 4801 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="proxy-httpd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441558 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="proxy-httpd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441572 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="ceilometer-notification-agent" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.441589 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" containerName="sg-core" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.445041 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.450287 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.470898 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-666f9c9746-pfrcd"] Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487132 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487266 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc 
kubenswrapper[4801]: I1124 21:28:57.487332 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvdhl\" (UniqueName: \"kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487395 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487446 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487556 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.487761 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml\") pod \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\" (UID: \"9916e08d-472d-47e4-a0ea-d8b67bb8faee\") " Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.488166 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-combined-ca-bundle\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.488253 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data-custom\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.488279 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.488516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjtp\" (UniqueName: \"kubernetes.io/projected/7acdae11-3fc7-4466-987f-5c7360c128c6-kube-api-access-jfjtp\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.488552 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acdae11-3fc7-4466-987f-5c7360c128c6-logs\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.495210 4801 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.495589 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.516162 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts" (OuterVolumeSpecName: "scripts") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.516585 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl" (OuterVolumeSpecName: "kube-api-access-vvdhl") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "kube-api-access-vvdhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.524017 4801 scope.go:117] "RemoveContainer" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.540012 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.542862 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.577126 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592316 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592427 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920a12a9-4e3f-4425-824d-1cbb1c686f99-logs\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592455 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-combined-ca-bundle\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592570 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjtp\" (UniqueName: \"kubernetes.io/projected/7acdae11-3fc7-4466-987f-5c7360c128c6-kube-api-access-jfjtp\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acdae11-3fc7-4466-987f-5c7360c128c6-logs\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592624 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data-custom\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 
21:28:57.592660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-combined-ca-bundle\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592711 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7qq\" (UniqueName: \"kubernetes.io/projected/920a12a9-4e3f-4425-824d-1cbb1c686f99-kube-api-access-hh7qq\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592737 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8tjq\" (UniqueName: \"kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592775 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " 
pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data-custom\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592850 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592910 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.592980 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 
21:28:57.592997 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.593010 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvdhl\" (UniqueName: \"kubernetes.io/projected/9916e08d-472d-47e4-a0ea-d8b67bb8faee-kube-api-access-vvdhl\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.593023 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9916e08d-472d-47e4-a0ea-d8b67bb8faee-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.593036 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.593774 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acdae11-3fc7-4466-987f-5c7360c128c6-logs\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.605728 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-combined-ca-bundle\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.607299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.611007 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acdae11-3fc7-4466-987f-5c7360c128c6-config-data-custom\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.612113 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.613957 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjtp\" (UniqueName: \"kubernetes.io/projected/7acdae11-3fc7-4466-987f-5c7360c128c6-kube-api-access-jfjtp\") pod \"barbican-worker-8664997b87-rmz64\" (UID: \"7acdae11-3fc7-4466-987f-5c7360c128c6\") " pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.615772 4801 scope.go:117] "RemoveContainer" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.643750 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.650040 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.654065 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.664932 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.676557 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697525 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7qq\" (UniqueName: \"kubernetes.io/projected/920a12a9-4e3f-4425-824d-1cbb1c686f99-kube-api-access-hh7qq\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8tjq\" (UniqueName: \"kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: 
\"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697762 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697792 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697844 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697916 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920a12a9-4e3f-4425-824d-1cbb1c686f99-logs\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.697942 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-combined-ca-bundle\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " 
pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.698004 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.698090 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data-custom\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.698148 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.698234 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.700007 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.700671 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.700963 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.701293 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.701671 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920a12a9-4e3f-4425-824d-1cbb1c686f99-logs\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.702300 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-combined-ca-bundle\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.702472 4801 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data" (OuterVolumeSpecName: "config-data") pod "9916e08d-472d-47e4-a0ea-d8b67bb8faee" (UID: "9916e08d-472d-47e4-a0ea-d8b67bb8faee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.702650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.703077 4801 scope.go:117] "RemoveContainer" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.704726 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": container with ID starting with 35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d not found: ID does not exist" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.704762 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d"} err="failed to get container status \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": rpc error: code = NotFound desc = could not find container \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": container with ID starting with 35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.704789 4801 
scope.go:117] "RemoveContainer" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.705214 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": container with ID starting with 3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7 not found: ID does not exist" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.705243 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7"} err="failed to get container status \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": rpc error: code = NotFound desc = could not find container \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": container with ID starting with 3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7 not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.705262 4801 scope.go:117] "RemoveContainer" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" Nov 24 21:28:57 crc kubenswrapper[4801]: E1124 21:28:57.706800 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": container with ID starting with e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc not found: ID does not exist" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.706826 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc"} err="failed to get container status \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": rpc error: code = NotFound desc = could not find container \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": container with ID starting with e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.706841 4801 scope.go:117] "RemoveContainer" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.707484 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d"} err="failed to get container status \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": rpc error: code = NotFound desc = could not find container \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": container with ID starting with 35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.707502 4801 scope.go:117] "RemoveContainer" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.707854 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7"} err="failed to get container status \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": rpc error: code = NotFound desc = could not find container \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": container with ID starting with 3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7 not found: ID does not 
exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.707875 4801 scope.go:117] "RemoveContainer" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708088 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc"} err="failed to get container status \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": rpc error: code = NotFound desc = could not find container \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": container with ID starting with e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708109 4801 scope.go:117] "RemoveContainer" containerID="35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708628 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d"} err="failed to get container status \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": rpc error: code = NotFound desc = could not find container \"35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d\": container with ID starting with 35e3212a3ced60574d0d4cee4e7475769e0829641fd3ebf3e446686164dfd13d not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708652 4801 scope.go:117] "RemoveContainer" containerID="3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708939 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7"} err="failed to get container status 
\"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": rpc error: code = NotFound desc = could not find container \"3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7\": container with ID starting with 3c1876c23e44aae9bb94f756be4ee070f43bb98d10f9cc6bd0356a06deb80cd7 not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.708961 4801 scope.go:117] "RemoveContainer" containerID="e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.710893 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc"} err="failed to get container status \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": rpc error: code = NotFound desc = could not find container \"e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc\": container with ID starting with e540c2455a4a02e38a70bfe89ffa6973ef7ac627c48bf52b97d325db9f0e74dc not found: ID does not exist" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.712844 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data-custom\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.717117 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920a12a9-4e3f-4425-824d-1cbb1c686f99-config-data\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.719008 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8tjq\" (UniqueName: \"kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq\") pod \"dnsmasq-dns-85ff748b95-fb9ld\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.720522 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7qq\" (UniqueName: \"kubernetes.io/projected/920a12a9-4e3f-4425-824d-1cbb1c686f99-kube-api-access-hh7qq\") pod \"barbican-keystone-listener-666f9c9746-pfrcd\" (UID: \"920a12a9-4e3f-4425-824d-1cbb1c686f99\") " pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.765301 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8664997b87-rmz64" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.797934 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.802805 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.803064 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbq9\" (UniqueName: \"kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.803087 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.803139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.803169 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data\") pod 
\"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.803295 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9916e08d-472d-47e4-a0ea-d8b67bb8faee-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.909717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnbq9\" (UniqueName: \"kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.910030 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.910077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.910093 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.910191 
4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.911059 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.917035 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.921015 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.921939 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.935927 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:57 crc kubenswrapper[4801]: I1124 21:28:57.937169 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnbq9\" (UniqueName: \"kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9\") pod \"barbican-api-8dd9d95c8-wjfkg\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.095793 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.127199 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.233515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wjl\" (UniqueName: \"kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.233748 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.233807 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.233904 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.233997 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.234034 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id\") pod \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\" (UID: \"3f3b859c-0916-4b01-a41f-0b9fd4d8b204\") " Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.234860 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.247686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl" (OuterVolumeSpecName: "kube-api-access-p7wjl") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "kube-api-access-p7wjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.248619 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts" (OuterVolumeSpecName: "scripts") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.253532 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.291732 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.347590 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.347625 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.347637 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.347646 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.347656 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wjl\" (UniqueName: \"kubernetes.io/projected/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-kube-api-access-p7wjl\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.378833 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data" (OuterVolumeSpecName: "config-data") pod "3f3b859c-0916-4b01-a41f-0b9fd4d8b204" (UID: "3f3b859c-0916-4b01-a41f-0b9fd4d8b204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.437890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ccwmw" event={"ID":"3f3b859c-0916-4b01-a41f-0b9fd4d8b204","Type":"ContainerDied","Data":"c41b5ed3100ab3ffbe71e6b930dc453accc260fa29d3736dd8d09149551e931b"} Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.437943 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c41b5ed3100ab3ffbe71e6b930dc453accc260fa29d3736dd8d09149551e931b" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.437996 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ccwmw" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.449815 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.452168 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3b859c-0916-4b01-a41f-0b9fd4d8b204-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.513448 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8664997b87-rmz64"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.587982 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.650226 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.820060 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9916e08d-472d-47e4-a0ea-d8b67bb8faee" path="/var/lib/kubelet/pods/9916e08d-472d-47e4-a0ea-d8b67bb8faee/volumes" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.821710 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: E1124 21:28:58.822731 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" containerName="cinder-db-sync" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.822750 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" containerName="cinder-db-sync" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.823216 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" containerName="cinder-db-sync" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.855595 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.860240 4801 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.881237 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.882358 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.882824 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-666f9c9746-pfrcd"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.914168 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.922120 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.928532 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.928758 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.928986 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.929173 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4bhzv" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.990185 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999029 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999108 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999137 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999159 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999249 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999277 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts\") pod \"ceilometer-0\" (UID: 
\"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999300 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999325 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ps4\" (UniqueName: \"kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999347 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999377 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999417 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:58 crc 
kubenswrapper[4801]: I1124 21:28:58.999444 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkll\" (UniqueName: \"kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:58 crc kubenswrapper[4801]: I1124 21:28:58.999463 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.065457 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.103921 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.104657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.104744 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc 
kubenswrapper[4801]: I1124 21:28:59.104823 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ps4\" (UniqueName: \"kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.104895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.104920 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105063 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkll\" (UniqueName: \"kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105523 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105638 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105708 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.105981 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.111741 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" 
Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.111787 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.112673 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.113878 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.120863 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.120961 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.130950 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.131006 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.134319 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.134678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.136878 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.140077 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ps4\" (UniqueName: \"kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4\") pod \"ceilometer-0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.145691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gdkll\" (UniqueName: \"kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll\") pod \"cinder-scheduler-0\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.148435 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.161842 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.161987 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.172117 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.208591 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.209695 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.209892 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.209972 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvh4\" (UniqueName: \"kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.210325 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.211454 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.242332 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.244872 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.247361 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.250342 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.256707 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.275786 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.314671 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.314745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.314812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.314853 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.314891 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvh4\" (UniqueName: \"kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315059 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315085 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315114 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526dl\" (UniqueName: \"kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 
21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315211 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315394 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.315437 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.316666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.317351 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.317798 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.318002 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.319159 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.351050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvh4\" (UniqueName: \"kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4\") pod \"dnsmasq-dns-5c9776ccc5-74j6j\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.434589 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.435401 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.436222 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.436279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-526dl\" (UniqueName: \"kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.437226 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.437526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.437777 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.437871 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.437991 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.438345 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.443297 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.453328 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.453546 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.454098 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.458133 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-526dl\" (UniqueName: \"kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl\") pod \"cinder-api-0\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " pod="openstack/cinder-api-0" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.475890 4801 generic.go:334] "Generic (PLEG): container finished" podID="2a7f7ccc-38b5-4166-8042-c7126ded8faf" containerID="dcddeb55a2bcd2ee73fee288c59e5ee97c929e6a1473f18e4f90973c6afc58d9" exitCode=0 Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.475982 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" event={"ID":"2a7f7ccc-38b5-4166-8042-c7126ded8faf","Type":"ContainerDied","Data":"dcddeb55a2bcd2ee73fee288c59e5ee97c929e6a1473f18e4f90973c6afc58d9"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.476019 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" event={"ID":"2a7f7ccc-38b5-4166-8042-c7126ded8faf","Type":"ContainerStarted","Data":"129230ddcf3af1f5dd2da9f54d59bbc6d3e26ec1438c8d4fb4109dc47fa29447"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.478696 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" event={"ID":"920a12a9-4e3f-4425-824d-1cbb1c686f99","Type":"ContainerStarted","Data":"3d28a460872aa871aeee33642f75256e2d5f5c4ffece936df279809f924845d0"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.481142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerStarted","Data":"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.481225 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerStarted","Data":"94c4f0e19b736ef9fb95e6bea1ffeaa96a9360a342900b16a6407d53fd9067da"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.485018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8664997b87-rmz64" event={"ID":"7acdae11-3fc7-4466-987f-5c7360c128c6","Type":"ContainerStarted","Data":"ffdea3890434c573beef390bc23ea57fd34c2fc36b25f4ccf45512be008fd586"} Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.497429 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:28:59 crc kubenswrapper[4801]: I1124 21:28:59.581326 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:29:00 crc kubenswrapper[4801]: W1124 21:29:00.112952 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e36110e_4c85_489f_bcec_305b72945ad0.slice/crio-7c947ad72a8d7f24dc122cf613e6c56e3e4e50b21de64956b5951e1b047cab0b WatchSource:0}: Error finding container 7c947ad72a8d7f24dc122cf613e6c56e3e4e50b21de64956b5951e1b047cab0b: Status 404 returned error can't find the container with id 7c947ad72a8d7f24dc122cf613e6c56e3e4e50b21de64956b5951e1b047cab0b Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.149089 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.333232 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.452935 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.517700 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerStarted","Data":"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79"} Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.518406 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.518500 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.524269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" event={"ID":"2a7f7ccc-38b5-4166-8042-c7126ded8faf","Type":"ContainerDied","Data":"129230ddcf3af1f5dd2da9f54d59bbc6d3e26ec1438c8d4fb4109dc47fa29447"} Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.524342 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fb9ld" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.524351 4801 scope.go:117] "RemoveContainer" containerID="dcddeb55a2bcd2ee73fee288c59e5ee97c929e6a1473f18e4f90973c6afc58d9" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.531773 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerStarted","Data":"7c947ad72a8d7f24dc122cf613e6c56e3e4e50b21de64956b5951e1b047cab0b"} Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.556495 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerStarted","Data":"ce08c9ec54b5a1907093f8658c5a8ebf7393c63cca6a15659b3cba2e86b644a5"} Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.562747 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8dd9d95c8-wjfkg" podStartSLOduration=3.562721396 podStartE2EDuration="3.562721396s" podCreationTimestamp="2025-11-24 21:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:00.552375115 +0000 UTC m=+1312.634961795" watchObservedRunningTime="2025-11-24 21:29:00.562721396 +0000 UTC m=+1312.645308056" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.582176 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.582402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.582478 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8tjq\" (UniqueName: \"kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.582629 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.582727 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.583037 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb\") pod \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\" (UID: \"2a7f7ccc-38b5-4166-8042-c7126ded8faf\") " Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.593893 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq" (OuterVolumeSpecName: "kube-api-access-l8tjq") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). 
InnerVolumeSpecName "kube-api-access-l8tjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.628834 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.634432 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.641832 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config" (OuterVolumeSpecName: "config") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.646606 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.650240 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.656484 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.660349 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a7f7ccc-38b5-4166-8042-c7126ded8faf" (UID: "2a7f7ccc-38b5-4166-8042-c7126ded8faf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699804 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699865 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699877 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699888 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8tjq\" (UniqueName: \"kubernetes.io/projected/2a7f7ccc-38b5-4166-8042-c7126ded8faf-kube-api-access-l8tjq\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699899 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 
24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.699908 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a7f7ccc-38b5-4166-8042-c7126ded8faf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.887994 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:29:00 crc kubenswrapper[4801]: I1124 21:29:00.908283 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fb9ld"] Nov 24 21:29:01 crc kubenswrapper[4801]: I1124 21:29:01.508144 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:01 crc kubenswrapper[4801]: I1124 21:29:01.572219 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerStarted","Data":"4653d15376ee33d5198432b9f042b1e764ab96a4dba0a739cd8437d1e1cb2225"} Nov 24 21:29:01 crc kubenswrapper[4801]: I1124 21:29:01.572592 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerStarted","Data":"dc1c1a2796aa6becd1555e9f558f2470cba2e6ce38ceae7cbe0d1651f5199af9"} Nov 24 21:29:01 crc kubenswrapper[4801]: I1124 21:29:01.577866 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" event={"ID":"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed","Type":"ContainerStarted","Data":"426b1c6cb6efacc5c62137e944d861c2d0fd6fbade02d238ed3457ab78d9eced"} Nov 24 21:29:02 crc kubenswrapper[4801]: I1124 21:29:02.595931 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" event={"ID":"920a12a9-4e3f-4425-824d-1cbb1c686f99","Type":"ContainerStarted","Data":"e5de7191cfaa7cc377197cb86e9c0c936d809928dec85a02e536e777d9717826"} Nov 24 21:29:02 crc 
kubenswrapper[4801]: I1124 21:29:02.599085 4801 generic.go:334] "Generic (PLEG): container finished" podID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerID="3a42bf0730670b80a21f950f2b3de25fd72e7557049b93345609115e13c26067" exitCode=0 Nov 24 21:29:02 crc kubenswrapper[4801]: I1124 21:29:02.599248 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" event={"ID":"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed","Type":"ContainerDied","Data":"3a42bf0730670b80a21f950f2b3de25fd72e7557049b93345609115e13c26067"} Nov 24 21:29:02 crc kubenswrapper[4801]: I1124 21:29:02.610802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerStarted","Data":"c17f09dca0e1313f40eb8e5fe1d034aed4d75502d0f09e9173248286ae9227b8"} Nov 24 21:29:02 crc kubenswrapper[4801]: I1124 21:29:02.618684 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8664997b87-rmz64" event={"ID":"7acdae11-3fc7-4466-987f-5c7360c128c6","Type":"ContainerStarted","Data":"389b024940cdfb666b9fcc4662468713149bba51e98e2074fb5798962ff2850a"} Nov 24 21:29:02 crc kubenswrapper[4801]: I1124 21:29:02.773130 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7f7ccc-38b5-4166-8042-c7126ded8faf" path="/var/lib/kubelet/pods/2a7f7ccc-38b5-4166-8042-c7126ded8faf/volumes" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.712751 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" event={"ID":"920a12a9-4e3f-4425-824d-1cbb1c686f99","Type":"ContainerStarted","Data":"963a3f84074731b9771725d8a3f1a07e1412d0e7157296a00fe505ff182c923f"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.746754 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" 
event={"ID":"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed","Type":"ContainerStarted","Data":"92b8522dd835d96af406b9970e2045a6f90da7d8baa632db28272d13a2b7c9a7"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.747087 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.755155 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerStarted","Data":"8aae75a4d84e2edf739d14bd43080bf2d9844201d023946ff3e2e703a396dffd"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.772454 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerStarted","Data":"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.786778 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8664997b87-rmz64" event={"ID":"7acdae11-3fc7-4466-987f-5c7360c128c6","Type":"ContainerStarted","Data":"92f136a5e8393493cbd9ecb6c0c17a62b4446db669ed1e7a103b2b09b099e5ca"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.796828 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-666f9c9746-pfrcd" podStartSLOduration=3.622647233 podStartE2EDuration="6.796796173s" podCreationTimestamp="2025-11-24 21:28:57 +0000 UTC" firstStartedPulling="2025-11-24 21:28:58.786210531 +0000 UTC m=+1310.868797191" lastFinishedPulling="2025-11-24 21:29:01.960359461 +0000 UTC m=+1314.042946131" observedRunningTime="2025-11-24 21:29:03.745887109 +0000 UTC m=+1315.828473779" watchObservedRunningTime="2025-11-24 21:29:03.796796173 +0000 UTC m=+1315.879382843" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.799437 4801 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" podStartSLOduration=5.799430232 podStartE2EDuration="5.799430232s" podCreationTimestamp="2025-11-24 21:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:03.771670326 +0000 UTC m=+1315.854256986" watchObservedRunningTime="2025-11-24 21:29:03.799430232 +0000 UTC m=+1315.882016902" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.835654 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerStarted","Data":"c6266119f517e8d620284b8218d842f1eab44bc9debb2c6bb61b91e88afc9fb2"} Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.835999 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.836701 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api" containerID="cri-o://c6266119f517e8d620284b8218d842f1eab44bc9debb2c6bb61b91e88afc9fb2" gracePeriod=30 Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.836974 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api-log" containerID="cri-o://4653d15376ee33d5198432b9f042b1e764ab96a4dba0a739cd8437d1e1cb2225" gracePeriod=30 Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.844756 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fc9f9c96b-sqlnv"] Nov 24 21:29:03 crc kubenswrapper[4801]: E1124 21:29:03.845506 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7f7ccc-38b5-4166-8042-c7126ded8faf" containerName="init" Nov 24 21:29:03 crc 
kubenswrapper[4801]: I1124 21:29:03.845524 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7f7ccc-38b5-4166-8042-c7126ded8faf" containerName="init" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.845785 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7f7ccc-38b5-4166-8042-c7126ded8faf" containerName="init" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.847451 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.856833 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.857217 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.860846 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8664997b87-rmz64" podStartSLOduration=3.416221944 podStartE2EDuration="6.860817671s" podCreationTimestamp="2025-11-24 21:28:57 +0000 UTC" firstStartedPulling="2025-11-24 21:28:58.521885589 +0000 UTC m=+1310.604472259" lastFinishedPulling="2025-11-24 21:29:01.966481316 +0000 UTC m=+1314.049067986" observedRunningTime="2025-11-24 21:29:03.817709523 +0000 UTC m=+1315.900296203" watchObservedRunningTime="2025-11-24 21:29:03.860817671 +0000 UTC m=+1315.943404331" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.932015 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf3533a-d21d-43fc-9d81-1cc5aa48893a-logs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.932115 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.932215 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data-custom\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.932239 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-combined-ca-bundle\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.932424 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-internal-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.934248 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2gh\" (UniqueName: \"kubernetes.io/projected/baf3533a-d21d-43fc-9d81-1cc5aa48893a-kube-api-access-9d2gh\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc 
kubenswrapper[4801]: I1124 21:29:03.934482 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-public-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.978286 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fc9f9c96b-sqlnv"] Nov 24 21:29:03 crc kubenswrapper[4801]: I1124 21:29:03.989839 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.989816476 podStartE2EDuration="5.989816476s" podCreationTimestamp="2025-11-24 21:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:03.865744269 +0000 UTC m=+1315.948330959" watchObservedRunningTime="2025-11-24 21:29:03.989816476 +0000 UTC m=+1316.072403146" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.048555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-internal-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.048787 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2gh\" (UniqueName: \"kubernetes.io/projected/baf3533a-d21d-43fc-9d81-1cc5aa48893a-kube-api-access-9d2gh\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.048901 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-public-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.049088 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf3533a-d21d-43fc-9d81-1cc5aa48893a-logs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.049135 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.049198 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data-custom\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.049234 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-combined-ca-bundle\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.054793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf3533a-d21d-43fc-9d81-1cc5aa48893a-logs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.064171 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.067897 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-config-data-custom\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.069803 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-public-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.085980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2gh\" (UniqueName: \"kubernetes.io/projected/baf3533a-d21d-43fc-9d81-1cc5aa48893a-kube-api-access-9d2gh\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.086828 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-combined-ca-bundle\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.087284 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baf3533a-d21d-43fc-9d81-1cc5aa48893a-internal-tls-certs\") pod \"barbican-api-6fc9f9c96b-sqlnv\" (UID: \"baf3533a-d21d-43fc-9d81-1cc5aa48893a\") " pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.206388 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.842843 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fc9f9c96b-sqlnv"] Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.861868 4801 generic.go:334] "Generic (PLEG): container finished" podID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerID="4653d15376ee33d5198432b9f042b1e764ab96a4dba0a739cd8437d1e1cb2225" exitCode=143 Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.862182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerDied","Data":"4653d15376ee33d5198432b9f042b1e764ab96a4dba0a739cd8437d1e1cb2225"} Nov 24 21:29:04 crc kubenswrapper[4801]: I1124 21:29:04.865350 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" event={"ID":"baf3533a-d21d-43fc-9d81-1cc5aa48893a","Type":"ContainerStarted","Data":"56a63f80a7c9085a275a910b52f21050ea7b77c906a56354d122028dfa08f0d6"} Nov 24 21:29:05 crc kubenswrapper[4801]: I1124 21:29:05.859822 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.919739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" event={"ID":"baf3533a-d21d-43fc-9d81-1cc5aa48893a","Type":"ContainerStarted","Data":"9a368c4fdc9bae37575f96ec7e51198d8dcb1a1f7b9586195507e1b64dfe102c"} Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.922537 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" event={"ID":"baf3533a-d21d-43fc-9d81-1cc5aa48893a","Type":"ContainerStarted","Data":"e92fbf5cd32fb7108a7278caf1951e5593d8bab7689f7df3e1f5ae098e433d62"} Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.922706 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.923074 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.932490 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerStarted","Data":"5cae30b539e373f51778cc1f402c284fbfaab49c01dd767448cf8ca5c6952ecd"} Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.934111 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerStarted","Data":"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347"} Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.953772 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" podStartSLOduration=3.9537416370000003 podStartE2EDuration="3.953741637s" podCreationTimestamp="2025-11-24 21:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:06.945872481 +0000 UTC m=+1319.028459151" watchObservedRunningTime="2025-11-24 21:29:06.953741637 +0000 UTC m=+1319.036328317" Nov 24 21:29:06 crc kubenswrapper[4801]: I1124 21:29:06.991740 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.056688151 podStartE2EDuration="8.991687431s" podCreationTimestamp="2025-11-24 21:28:58 +0000 UTC" firstStartedPulling="2025-11-24 21:29:00.345678179 +0000 UTC m=+1312.428264849" lastFinishedPulling="2025-11-24 21:29:02.280677449 +0000 UTC m=+1314.363264129" observedRunningTime="2025-11-24 21:29:06.981206635 +0000 UTC m=+1319.063793305" watchObservedRunningTime="2025-11-24 21:29:06.991687431 +0000 UTC m=+1319.074274121" Nov 24 21:29:07 crc kubenswrapper[4801]: I1124 21:29:07.955583 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerStarted","Data":"6fec548245ef1d0d0f5fd1c21302ebd682fbb41b8c911ded17635ae5d6203cf0"} Nov 24 21:29:07 crc kubenswrapper[4801]: I1124 21:29:07.988608 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.016293807 podStartE2EDuration="9.988580446s" podCreationTimestamp="2025-11-24 21:28:58 +0000 UTC" firstStartedPulling="2025-11-24 21:29:00.139602012 +0000 UTC m=+1312.222188682" lastFinishedPulling="2025-11-24 21:29:07.111888611 +0000 UTC m=+1319.194475321" observedRunningTime="2025-11-24 21:29:07.98671163 +0000 UTC m=+1320.069298290" watchObservedRunningTime="2025-11-24 21:29:07.988580446 +0000 UTC m=+1320.071167156" Nov 24 21:29:08 crc kubenswrapper[4801]: I1124 21:29:08.971746 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.435648 4801 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.499358 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.630854 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.631611 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="dnsmasq-dns" containerID="cri-o://b26004a0a5144a889fc27f7359107cfd53dad877f301cf1447af946b7fc04aef" gracePeriod=10 Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.806116 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.854802 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:09 crc kubenswrapper[4801]: I1124 21:29:09.964673 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.010137 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f8c5d485-f945g" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.017860 4801 generic.go:334] "Generic (PLEG): container finished" podID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerID="b26004a0a5144a889fc27f7359107cfd53dad877f301cf1447af946b7fc04aef" exitCode=0 Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.018434 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" 
event={"ID":"310fde06-ed0f-4b6d-b928-03b14918aef7","Type":"ContainerDied","Data":"b26004a0a5144a889fc27f7359107cfd53dad877f301cf1447af946b7fc04aef"} Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.091548 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.092154 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6996d87ddd-ph957" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-api" containerID="cri-o://9929003925df02c04b10bb5ba2a7aabe654756684512ad20c8328d14bccc5544" gracePeriod=30 Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.092808 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6996d87ddd-ph957" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-httpd" containerID="cri-o://c1290c152ff34be279b8ba314f8f4364edf48a522b440d9e8c251d1f6d2bd460" gracePeriod=30 Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.111675 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.261004 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297606 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6pr6\" (UniqueName: \"kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297765 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297791 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297880 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.297933 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config\") pod \"310fde06-ed0f-4b6d-b928-03b14918aef7\" (UID: \"310fde06-ed0f-4b6d-b928-03b14918aef7\") " Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.323025 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6" (OuterVolumeSpecName: "kube-api-access-b6pr6") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "kube-api-access-b6pr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.400908 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6pr6\" (UniqueName: \"kubernetes.io/projected/310fde06-ed0f-4b6d-b928-03b14918aef7-kube-api-access-b6pr6\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.483137 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.503522 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.505504 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.505534 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.508103 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config" (OuterVolumeSpecName: "config") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.555091 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.570195 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "310fde06-ed0f-4b6d-b928-03b14918aef7" (UID: "310fde06-ed0f-4b6d-b928-03b14918aef7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.608531 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.608575 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:10 crc kubenswrapper[4801]: I1124 21:29:10.608586 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310fde06-ed0f-4b6d-b928-03b14918aef7-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.056652 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="cinder-scheduler" containerID="cri-o://cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e" gracePeriod=30 Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.057285 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="probe" containerID="cri-o://f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347" gracePeriod=30 Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.057517 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.058558 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dpxj5" event={"ID":"310fde06-ed0f-4b6d-b928-03b14918aef7","Type":"ContainerDied","Data":"8f5303990397a22bc6e0e66284d16f9854b9fe2973bb7d4ae0e2cd5fb8d98d45"} Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.058666 4801 scope.go:117] "RemoveContainer" containerID="b26004a0a5144a889fc27f7359107cfd53dad877f301cf1447af946b7fc04aef" Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.108981 4801 scope.go:117] "RemoveContainer" containerID="dc6db00e6050a22412b5c29514bd2ad55ad24c3e539ff6dc8f12237872fe4f0c" Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.120851 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:29:11 crc kubenswrapper[4801]: I1124 21:29:11.131516 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dpxj5"] Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.073575 4801 generic.go:334] "Generic (PLEG): container finished" podID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerID="f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347" exitCode=0 Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.073675 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerDied","Data":"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347"} Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.082023 4801 generic.go:334] "Generic (PLEG): container finished" podID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerID="c1290c152ff34be279b8ba314f8f4364edf48a522b440d9e8c251d1f6d2bd460" exitCode=0 Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.082048 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerDied","Data":"c1290c152ff34be279b8ba314f8f4364edf48a522b440d9e8c251d1f6d2bd460"} Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.537672 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.595130 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.595530 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.597826 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkll\" (UniqueName: \"kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.598022 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.598108 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.598223 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom\") pod \"4ebf440f-ae40-48c8-926b-89a0aa42d583\" (UID: \"4ebf440f-ae40-48c8-926b-89a0aa42d583\") " Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.601511 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts" (OuterVolumeSpecName: "scripts") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.603831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.606749 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.609895 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll" (OuterVolumeSpecName: "kube-api-access-gdkll") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "kube-api-access-gdkll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.684752 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.694230 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" path="/var/lib/kubelet/pods/310fde06-ed0f-4b6d-b928-03b14918aef7/volumes" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.707079 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkll\" (UniqueName: \"kubernetes.io/projected/4ebf440f-ae40-48c8-926b-89a0aa42d583-kube-api-access-gdkll\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.707111 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.707121 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4ebf440f-ae40-48c8-926b-89a0aa42d583-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.707132 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.707143 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.738678 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data" (OuterVolumeSpecName: "config-data") pod "4ebf440f-ae40-48c8-926b-89a0aa42d583" (UID: "4ebf440f-ae40-48c8-926b-89a0aa42d583"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:12 crc kubenswrapper[4801]: I1124 21:29:12.811509 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ebf440f-ae40-48c8-926b-89a0aa42d583-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.076318 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.117871 4801 generic.go:334] "Generic (PLEG): container finished" podID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerID="cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e" exitCode=0 Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.117921 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerDied","Data":"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e"} Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.117955 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ebf440f-ae40-48c8-926b-89a0aa42d583","Type":"ContainerDied","Data":"ce08c9ec54b5a1907093f8658c5a8ebf7393c63cca6a15659b3cba2e86b644a5"} Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.117979 4801 scope.go:117] "RemoveContainer" containerID="f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.118059 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.155426 4801 scope.go:117] "RemoveContainer" containerID="cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.199219 4801 scope.go:117] "RemoveContainer" containerID="f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347" Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 21:29:13.202884 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347\": container with ID starting with f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347 not found: ID does not exist" containerID="f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.203022 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347"} err="failed to get container status \"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347\": rpc error: code = NotFound desc = could not find container \"f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347\": container with ID starting with f0e71828068b75c7a12e0579afc5b81da6b2f5b32b162393e40086d017e37347 not found: ID does not exist" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.203117 4801 scope.go:117] "RemoveContainer" containerID="cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.203275 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 21:29:13.203855 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e\": container with ID starting with cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e not found: ID does not exist" containerID="cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.204067 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e"} err="failed to get container status \"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e\": rpc error: code = NotFound desc = could not find container \"cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e\": container with ID starting with cf474f8aad522a01161860f8a8d03988f8b9dbb9874c961b58202502fe37044e not found: ID does not exist" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.238981 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.258009 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 21:29:13.258707 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="cinder-scheduler" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.258732 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="cinder-scheduler" Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 21:29:13.258759 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="probe" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.258767 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="probe" Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 
21:29:13.258785 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="init" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.258794 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="init" Nov 24 21:29:13 crc kubenswrapper[4801]: E1124 21:29:13.258813 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="dnsmasq-dns" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.258819 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="dnsmasq-dns" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.259085 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="probe" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.259112 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="310fde06-ed0f-4b6d-b928-03b14918aef7" containerName="dnsmasq-dns" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.259130 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" containerName="cinder-scheduler" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.261010 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.264109 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.290562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.329783 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.329882 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99d06093-c556-40fc-bec9-47096b4c2aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.329992 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.330051 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 
21:29:13.330156 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mct\" (UniqueName: \"kubernetes.io/projected/99d06093-c556-40fc-bec9-47096b4c2aa5-kube-api-access-45mct\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.330204 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432020 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432099 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432180 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mct\" (UniqueName: \"kubernetes.io/projected/99d06093-c556-40fc-bec9-47096b4c2aa5-kube-api-access-45mct\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432216 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432246 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99d06093-c556-40fc-bec9-47096b4c2aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.432398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99d06093-c556-40fc-bec9-47096b4c2aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.436817 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.438459 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " 
pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.444355 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.444492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d06093-c556-40fc-bec9-47096b4c2aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.456706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mct\" (UniqueName: \"kubernetes.io/projected/99d06093-c556-40fc-bec9-47096b4c2aa5-kube-api-access-45mct\") pod \"cinder-scheduler-0\" (UID: \"99d06093-c556-40fc-bec9-47096b4c2aa5\") " pod="openstack/cinder-scheduler-0" Nov 24 21:29:13 crc kubenswrapper[4801]: I1124 21:29:13.596556 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:29:14 crc kubenswrapper[4801]: I1124 21:29:14.231261 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:29:14 crc kubenswrapper[4801]: W1124 21:29:14.243177 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99d06093_c556_40fc_bec9_47096b4c2aa5.slice/crio-89cd624afb1193f3adf1be6a8686010a5351d86154058070b45b1120e53d1f7d WatchSource:0}: Error finding container 89cd624afb1193f3adf1be6a8686010a5351d86154058070b45b1120e53d1f7d: Status 404 returned error can't find the container with id 89cd624afb1193f3adf1be6a8686010a5351d86154058070b45b1120e53d1f7d Nov 24 21:29:14 crc kubenswrapper[4801]: I1124 21:29:14.683780 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebf440f-ae40-48c8-926b-89a0aa42d583" path="/var/lib/kubelet/pods/4ebf440f-ae40-48c8-926b-89a0aa42d583/volumes" Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.155152 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99d06093-c556-40fc-bec9-47096b4c2aa5","Type":"ContainerStarted","Data":"3f0350802b043d054a92872398148dd17487534e68cd6b0202cd49e6b904e5d4"} Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.155229 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99d06093-c556-40fc-bec9-47096b4c2aa5","Type":"ContainerStarted","Data":"89cd624afb1193f3adf1be6a8686010a5351d86154058070b45b1120e53d1f7d"} Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.795925 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.867165 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fc9f9c96b-sqlnv" Nov 24 21:29:15 
crc kubenswrapper[4801]: I1124 21:29:15.972502 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.972850 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8dd9d95c8-wjfkg" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api-log" containerID="cri-o://a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315" gracePeriod=30 Nov 24 21:29:15 crc kubenswrapper[4801]: I1124 21:29:15.972927 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8dd9d95c8-wjfkg" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api" containerID="cri-o://044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79" gracePeriod=30 Nov 24 21:29:16 crc kubenswrapper[4801]: I1124 21:29:16.192709 4801 generic.go:334] "Generic (PLEG): container finished" podID="490f770c-166c-4564-b01b-ce7ccea0356d" containerID="a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315" exitCode=143 Nov 24 21:29:16 crc kubenswrapper[4801]: I1124 21:29:16.192847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerDied","Data":"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315"} Nov 24 21:29:16 crc kubenswrapper[4801]: I1124 21:29:16.202456 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99d06093-c556-40fc-bec9-47096b4c2aa5","Type":"ContainerStarted","Data":"eeed236a57429bd0ed56e5b9de9728269b31ed3ff10f73fa4e19da4f3ce33335"} Nov 24 21:29:16 crc kubenswrapper[4801]: I1124 21:29:16.315101 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.315046573 podStartE2EDuration="3.315046573s" 
podCreationTimestamp="2025-11-24 21:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:16.236928529 +0000 UTC m=+1328.319515199" watchObservedRunningTime="2025-11-24 21:29:16.315046573 +0000 UTC m=+1328.397633233" Nov 24 21:29:17 crc kubenswrapper[4801]: I1124 21:29:17.894259 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:29:17 crc kubenswrapper[4801]: I1124 21:29:17.949262 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f87bb5dd-vfs99" Nov 24 21:29:18 crc kubenswrapper[4801]: I1124 21:29:18.598132 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:29:18 crc kubenswrapper[4801]: I1124 21:29:18.733100 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6c567b6958-cngc5" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.499907 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8dd9d95c8-wjfkg" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:37726->10.217.0.202:9311: read: connection reset by peer" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.500498 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8dd9d95c8-wjfkg" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:37714->10.217.0.202:9311: read: connection reset by peer" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.922730 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:29:19 crc 
kubenswrapper[4801]: I1124 21:29:19.944185 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.950960 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.951126 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jwqzr" Nov 24 21:29:19 crc kubenswrapper[4801]: I1124 21:29:19.951397 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.013189 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.111405 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.111536 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5z8\" (UniqueName: \"kubernetes.io/projected/c6be94bd-58b9-45e1-a18b-a85a048a5278-kube-api-access-bs5z8\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.111663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " 
pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.111699 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.171752 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.213840 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.213936 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.214055 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.214117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5z8\" (UniqueName: \"kubernetes.io/projected/c6be94bd-58b9-45e1-a18b-a85a048a5278-kube-api-access-bs5z8\") pod 
\"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.215333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.223202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.223376 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6be94bd-58b9-45e1-a18b-a85a048a5278-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.245912 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5z8\" (UniqueName: \"kubernetes.io/projected/c6be94bd-58b9-45e1-a18b-a85a048a5278-kube-api-access-bs5z8\") pod \"openstackclient\" (UID: \"c6be94bd-58b9-45e1-a18b-a85a048a5278\") " pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.268566 4801 generic.go:334] "Generic (PLEG): container finished" podID="490f770c-166c-4564-b01b-ce7ccea0356d" containerID="044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79" exitCode=0 Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.268638 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" 
event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerDied","Data":"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79"} Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.268686 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8dd9d95c8-wjfkg" event={"ID":"490f770c-166c-4564-b01b-ce7ccea0356d","Type":"ContainerDied","Data":"94c4f0e19b736ef9fb95e6bea1ffeaa96a9360a342900b16a6407d53fd9067da"} Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.268711 4801 scope.go:117] "RemoveContainer" containerID="044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.269040 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8dd9d95c8-wjfkg" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.290244 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.315550 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs\") pod \"490f770c-166c-4564-b01b-ce7ccea0356d\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.316217 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data\") pod \"490f770c-166c-4564-b01b-ce7ccea0356d\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.316344 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnbq9\" (UniqueName: \"kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9\") pod \"490f770c-166c-4564-b01b-ce7ccea0356d\" 
(UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.316436 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom\") pod \"490f770c-166c-4564-b01b-ce7ccea0356d\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.316667 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle\") pod \"490f770c-166c-4564-b01b-ce7ccea0356d\" (UID: \"490f770c-166c-4564-b01b-ce7ccea0356d\") " Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.322635 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs" (OuterVolumeSpecName: "logs") pod "490f770c-166c-4564-b01b-ce7ccea0356d" (UID: "490f770c-166c-4564-b01b-ce7ccea0356d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.328803 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9" (OuterVolumeSpecName: "kube-api-access-wnbq9") pod "490f770c-166c-4564-b01b-ce7ccea0356d" (UID: "490f770c-166c-4564-b01b-ce7ccea0356d"). InnerVolumeSpecName "kube-api-access-wnbq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.333521 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "490f770c-166c-4564-b01b-ce7ccea0356d" (UID: "490f770c-166c-4564-b01b-ce7ccea0356d"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.373894 4801 scope.go:117] "RemoveContainer" containerID="a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.375182 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "490f770c-166c-4564-b01b-ce7ccea0356d" (UID: "490f770c-166c-4564-b01b-ce7ccea0356d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.397699 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data" (OuterVolumeSpecName: "config-data") pod "490f770c-166c-4564-b01b-ce7ccea0356d" (UID: "490f770c-166c-4564-b01b-ce7ccea0356d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.404821 4801 scope.go:117] "RemoveContainer" containerID="044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79" Nov 24 21:29:20 crc kubenswrapper[4801]: E1124 21:29:20.405774 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79\": container with ID starting with 044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79 not found: ID does not exist" containerID="044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.405846 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79"} err="failed to get container status \"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79\": rpc error: code = NotFound desc = could not find container \"044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79\": container with ID starting with 044f1437c09074e20f230436d8bd826254f277bff98467bde1fb28fb1da86a79 not found: ID does not exist" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.405881 4801 scope.go:117] "RemoveContainer" containerID="a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315" Nov 24 21:29:20 crc kubenswrapper[4801]: E1124 21:29:20.408719 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315\": container with ID starting with a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315 not found: ID does not exist" containerID="a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.408770 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315"} err="failed to get container status \"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315\": rpc error: code = NotFound desc = could not find container \"a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315\": container with ID starting with a250925d09fb1864309153ecb73ba27018197749e1a4dd15f68f7316204b8315 not found: ID does not exist" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.419834 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.419877 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/490f770c-166c-4564-b01b-ce7ccea0356d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.419890 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.419902 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnbq9\" (UniqueName: \"kubernetes.io/projected/490f770c-166c-4564-b01b-ce7ccea0356d-kube-api-access-wnbq9\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.419911 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/490f770c-166c-4564-b01b-ce7ccea0356d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.706672 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.718007 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8dd9d95c8-wjfkg"] Nov 24 21:29:20 crc kubenswrapper[4801]: I1124 21:29:20.887677 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:29:20 crc kubenswrapper[4801]: W1124 21:29:20.904355 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6be94bd_58b9_45e1_a18b_a85a048a5278.slice/crio-8969a7974d5bbc130f71704701ad7cc083519ae063dfcddda6fffc6bfa54256e WatchSource:0}: Error finding container 8969a7974d5bbc130f71704701ad7cc083519ae063dfcddda6fffc6bfa54256e: Status 404 returned error can't find the container with id 8969a7974d5bbc130f71704701ad7cc083519ae063dfcddda6fffc6bfa54256e Nov 24 21:29:21 crc kubenswrapper[4801]: I1124 21:29:21.282462 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c6be94bd-58b9-45e1-a18b-a85a048a5278","Type":"ContainerStarted","Data":"8969a7974d5bbc130f71704701ad7cc083519ae063dfcddda6fffc6bfa54256e"} Nov 24 21:29:22 crc kubenswrapper[4801]: I1124 21:29:22.685810 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" path="/var/lib/kubelet/pods/490f770c-166c-4564-b01b-ce7ccea0356d/volumes" Nov 24 21:29:23 crc kubenswrapper[4801]: I1124 21:29:23.885899 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:29:24 crc kubenswrapper[4801]: I1124 21:29:24.320715 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:29:24 
crc kubenswrapper[4801]: I1124 21:29:24.321302 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:24 crc kubenswrapper[4801]: I1124 21:29:24.321457 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:29:24 crc kubenswrapper[4801]: I1124 21:29:24.322627 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:29:24 crc kubenswrapper[4801]: I1124 21:29:24.322721 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b" gracePeriod=600 Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.361250 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b" exitCode=0 Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.361280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b"} Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.361722 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6"} Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.361766 4801 scope.go:117] "RemoveContainer" containerID="26dfa37555d46211186d9faaf879ca9711d8f0944f7938e017dd598eb1c35e3b" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.733719 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77d858bcc9-bwkzb"] Nov 24 21:29:25 crc kubenswrapper[4801]: E1124 21:29:25.735215 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api-log" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.735241 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api-log" Nov 24 21:29:25 crc kubenswrapper[4801]: E1124 21:29:25.735292 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.735301 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.735608 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" containerName="barbican-api" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.735629 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="490f770c-166c-4564-b01b-ce7ccea0356d" 
containerName="barbican-api-log" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.737411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.751099 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77d858bcc9-bwkzb"] Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.756864 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.757137 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.757255 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.921174 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-config-data\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.921217 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-log-httpd\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.921321 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qwr\" (UniqueName: \"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-kube-api-access-x9qwr\") pod 
\"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.922245 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-combined-ca-bundle\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.922304 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-run-httpd\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.922349 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-internal-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.922409 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-public-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:25 crc kubenswrapper[4801]: I1124 21:29:25.922486 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-etc-swift\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024633 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-public-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024752 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-etc-swift\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024775 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-log-httpd\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024819 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-config-data\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024909 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qwr\" (UniqueName: 
\"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-kube-api-access-x9qwr\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024942 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-combined-ca-bundle\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.024978 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-run-httpd\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.025019 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-internal-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.026178 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-log-httpd\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.026398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44f65da7-819a-43dd-9267-2b30cffff0f2-run-httpd\") 
pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.042389 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-etc-swift\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.045401 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-combined-ca-bundle\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.045641 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-internal-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.046165 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-config-data\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.046407 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qwr\" (UniqueName: \"kubernetes.io/projected/44f65da7-819a-43dd-9267-2b30cffff0f2-kube-api-access-x9qwr\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " 
pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.046982 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f65da7-819a-43dd-9267-2b30cffff0f2-public-tls-certs\") pod \"swift-proxy-77d858bcc9-bwkzb\" (UID: \"44f65da7-819a-43dd-9267-2b30cffff0f2\") " pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.101281 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:26 crc kubenswrapper[4801]: I1124 21:29:26.838656 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77d858bcc9-bwkzb"] Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.407286 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77d858bcc9-bwkzb" event={"ID":"44f65da7-819a-43dd-9267-2b30cffff0f2","Type":"ContainerStarted","Data":"adf4ccef4cf794248e617851a18f7d72862d7d52488d8b810f744474554076c3"} Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.407622 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77d858bcc9-bwkzb" event={"ID":"44f65da7-819a-43dd-9267-2b30cffff0f2","Type":"ContainerStarted","Data":"996963b5a6906b070b169d1bbf7dfd1a45735830d7490e545a2d2d19c3c0659b"} Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.416726 4801 generic.go:334] "Generic (PLEG): container finished" podID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerID="9929003925df02c04b10bb5ba2a7aabe654756684512ad20c8328d14bccc5544" exitCode=0 Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.416790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerDied","Data":"9929003925df02c04b10bb5ba2a7aabe654756684512ad20c8328d14bccc5544"} Nov 24 21:29:27 crc 
kubenswrapper[4801]: I1124 21:29:27.820035 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.820860 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-central-agent" containerID="cri-o://c17f09dca0e1313f40eb8e5fe1d034aed4d75502d0f09e9173248286ae9227b8" gracePeriod=30 Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.826299 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" containerID="cri-o://6fec548245ef1d0d0f5fd1c21302ebd682fbb41b8c911ded17635ae5d6203cf0" gracePeriod=30 Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.826531 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="sg-core" containerID="cri-o://5cae30b539e373f51778cc1f402c284fbfaab49c01dd767448cf8ca5c6952ecd" gracePeriod=30 Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.826540 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-notification-agent" containerID="cri-o://8aae75a4d84e2edf739d14bd43080bf2d9844201d023946ff3e2e703a396dffd" gracePeriod=30 Nov 24 21:29:27 crc kubenswrapper[4801]: I1124 21:29:27.845075 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.367598 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 
21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.368293 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-log" containerID="cri-o://1e34f51e8f2b9e9b7d4073db7fee2c72004217cb791ecf489730c3da31b225a2" gracePeriod=30 Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.368456 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-httpd" containerID="cri-o://51897b962c56e28aed948da93494e0ae0aad5c8b6f44ae4e813d7caa6293e935" gracePeriod=30 Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434261 4801 generic.go:334] "Generic (PLEG): container finished" podID="5e36110e-4c85-489f-bcec-305b72945ad0" containerID="6fec548245ef1d0d0f5fd1c21302ebd682fbb41b8c911ded17635ae5d6203cf0" exitCode=0 Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434339 4801 generic.go:334] "Generic (PLEG): container finished" podID="5e36110e-4c85-489f-bcec-305b72945ad0" containerID="5cae30b539e373f51778cc1f402c284fbfaab49c01dd767448cf8ca5c6952ecd" exitCode=2 Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434349 4801 generic.go:334] "Generic (PLEG): container finished" podID="5e36110e-4c85-489f-bcec-305b72945ad0" containerID="c17f09dca0e1313f40eb8e5fe1d034aed4d75502d0f09e9173248286ae9227b8" exitCode=0 Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434340 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerDied","Data":"6fec548245ef1d0d0f5fd1c21302ebd682fbb41b8c911ded17635ae5d6203cf0"} Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434479 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerDied","Data":"5cae30b539e373f51778cc1f402c284fbfaab49c01dd767448cf8ca5c6952ecd"} Nov 24 21:29:28 crc kubenswrapper[4801]: I1124 21:29:28.434511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerDied","Data":"c17f09dca0e1313f40eb8e5fe1d034aed4d75502d0f09e9173248286ae9227b8"} Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.246400 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.203:3000/\": dial tcp 10.217.0.203:3000: connect: connection refused" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.248771 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qfhn4"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.251510 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.288039 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qfhn4"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.340908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.340974 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpsq9\" (UniqueName: \"kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.349794 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-t4fbs"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.351825 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.364328 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t4fbs"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.445968 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.446032 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpsq9\" (UniqueName: \"kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.446072 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdgh\" (UniqueName: \"kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.446099 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.447009 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.453233 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f95e-account-create-zl2r5"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.455330 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.456606 4801 generic.go:334] "Generic (PLEG): container finished" podID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerID="1e34f51e8f2b9e9b7d4073db7fee2c72004217cb791ecf489730c3da31b225a2" exitCode=143 Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.456657 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerDied","Data":"1e34f51e8f2b9e9b7d4073db7fee2c72004217cb791ecf489730c3da31b225a2"} Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.476278 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f95e-account-create-zl2r5"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.490960 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.505028 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpsq9\" (UniqueName: \"kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9\") pod \"nova-api-db-create-qfhn4\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.555348 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.555871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.556964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.570536 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skjm\" (UniqueName: \"kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.574738 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdgh\" (UniqueName: \"kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.576334 4801 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.603054 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdgh\" (UniqueName: \"kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh\") pod \"nova-cell0-db-create-t4fbs\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.641842 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-474vq"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.653997 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.672806 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.680551 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-474vq"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.681492 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skjm\" (UniqueName: \"kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.681706 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc 
kubenswrapper[4801]: I1124 21:29:29.691931 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.694451 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cabb-account-create-8scsl"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.700693 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.703092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skjm\" (UniqueName: \"kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm\") pod \"nova-api-f95e-account-create-zl2r5\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.704610 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.719322 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cabb-account-create-8scsl"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.783906 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhkc\" (UniqueName: \"kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc\") pod \"nova-cell1-db-create-474vq\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.784379 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts\") pod \"nova-cell1-db-create-474vq\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.784940 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.785700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtw57\" (UniqueName: \"kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.862443 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ef6a-account-create-d9b7h"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.864318 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.866668 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.875594 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef6a-account-create-d9b7h"] Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.885341 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.888238 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mhkc\" (UniqueName: \"kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc\") pod \"nova-cell1-db-create-474vq\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.888342 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts\") pod \"nova-cell1-db-create-474vq\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.888498 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.888569 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtw57\" (UniqueName: \"kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.889294 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts\") pod \"nova-cell1-db-create-474vq\" (UID: 
\"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.889473 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.912202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtw57\" (UniqueName: \"kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57\") pod \"nova-cell0-cabb-account-create-8scsl\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.936025 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhkc\" (UniqueName: \"kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc\") pod \"nova-cell1-db-create-474vq\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.983872 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.991031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7cj\" (UniqueName: \"kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:29 crc kubenswrapper[4801]: I1124 21:29:29.991196 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.009064 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.009614 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-log" containerID="cri-o://b90475abb82f1fd03eb3817683c40ba0f834979270b39d32fc43edec03c7d176" gracePeriod=30 Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.010235 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-httpd" containerID="cri-o://ad2e57a16ec696a20855e2e6ea2cf1f6b90322c3a418fa33ea5d564570bcf1b0" gracePeriod=30 Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.072267 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.096354 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr7cj\" (UniqueName: \"kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.096657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.097669 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.120022 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr7cj\" (UniqueName: \"kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj\") pod \"nova-cell1-ef6a-account-create-d9b7h\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.199227 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.483331 4801 generic.go:334] "Generic (PLEG): container finished" podID="447770ad-de60-4323-95a2-1c530eff1089" containerID="b90475abb82f1fd03eb3817683c40ba0f834979270b39d32fc43edec03c7d176" exitCode=143 Nov 24 21:29:30 crc kubenswrapper[4801]: I1124 21:29:30.483414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerDied","Data":"b90475abb82f1fd03eb3817683c40ba0f834979270b39d32fc43edec03c7d176"} Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.117582 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.120336 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.131361 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.131841 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vv9tz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.132008 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.164440 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.164645 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjw6\" (UniqueName: \"kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.164681 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.164719 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.165654 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.268489 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggjw6\" (UniqueName: \"kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.268855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle\") pod 
\"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.268912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.269276 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.275709 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.277871 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.283905 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.285767 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.289439 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.308237 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.309012 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggjw6\" (UniqueName: \"kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.320686 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle\") pod \"heat-engine-5848585774-8jkmz\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 
21:29:32.363752 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.388577 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.428237 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.440320 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbsb\" (UniqueName: \"kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.440478 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.440571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.451845 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom\") pod 
\"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.488948 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.531805 4801 generic.go:334] "Generic (PLEG): container finished" podID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerID="51897b962c56e28aed948da93494e0ae0aad5c8b6f44ae4e813d7caa6293e935" exitCode=0 Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.531860 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerDied","Data":"51897b962c56e28aed948da93494e0ae0aad5c8b6f44ae4e813d7caa6293e935"} Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.534666 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.547208 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.556219 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561355 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbsb\" (UniqueName: \"kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561485 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppqt\" (UniqueName: \"kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561624 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561726 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561854 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.561942 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.562057 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.563146 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.579213 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.580475 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.590210 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.593202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbsb\" (UniqueName: \"kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb\") pod \"heat-cfnapi-6bf64d6fd8-98n98\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.605159 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:32 crc kubenswrapper[4801]: 
I1124 21:29:32.667199 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667301 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667343 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667414 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667540 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks8h\" (UniqueName: \"kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 
21:29:32.667563 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667579 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppqt\" (UniqueName: \"kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.667631 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.668374 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.668419 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.668619 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.668835 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.668888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.669740 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.717142 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mppqt\" (UniqueName: 
\"kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt\") pod \"dnsmasq-dns-7756b9d78c-99chz\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.764916 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.772212 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.772415 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks8h\" (UniqueName: \"kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.772453 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.772522 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 
21:29:32.779404 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.782892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.783183 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.786124 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.795924 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks8h\" (UniqueName: \"kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h\") pod \"heat-api-5dfc99c655-lvmhc\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:32 crc kubenswrapper[4801]: I1124 21:29:32.888921 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.262950 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:55376->10.217.0.192:9292: read: connection reset by peer" Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.265470 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:55392->10.217.0.192:9292: read: connection reset by peer" Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.502112 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": dial tcp 10.217.0.193:9292: connect: connection refused" Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.502113 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": dial tcp 10.217.0.193:9292: connect: connection refused" Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.553299 4801 generic.go:334] "Generic (PLEG): container finished" podID="447770ad-de60-4323-95a2-1c530eff1089" containerID="ad2e57a16ec696a20855e2e6ea2cf1f6b90322c3a418fa33ea5d564570bcf1b0" exitCode=0 Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.553407 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerDied","Data":"ad2e57a16ec696a20855e2e6ea2cf1f6b90322c3a418fa33ea5d564570bcf1b0"} Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.560422 4801 generic.go:334] "Generic (PLEG): container finished" podID="5e36110e-4c85-489f-bcec-305b72945ad0" containerID="8aae75a4d84e2edf739d14bd43080bf2d9844201d023946ff3e2e703a396dffd" exitCode=0 Nov 24 21:29:33 crc kubenswrapper[4801]: I1124 21:29:33.560472 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerDied","Data":"8aae75a4d84e2edf739d14bd43080bf2d9844201d023946ff3e2e703a396dffd"} Nov 24 21:29:34 crc kubenswrapper[4801]: I1124 21:29:34.576259 4801 generic.go:334] "Generic (PLEG): container finished" podID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerID="c6266119f517e8d620284b8218d842f1eab44bc9debb2c6bb61b91e88afc9fb2" exitCode=137 Nov 24 21:29:34 crc kubenswrapper[4801]: I1124 21:29:34.576349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerDied","Data":"c6266119f517e8d620284b8218d842f1eab44bc9debb2c6bb61b91e88afc9fb2"} Nov 24 21:29:34 crc kubenswrapper[4801]: I1124 21:29:34.582910 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.206:8776/healthcheck\": dial tcp 10.217.0.206:8776: connect: connection refused" Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.796608 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.915748 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsd49\" (UniqueName: \"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49\") pod \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.916338 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs\") pod \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.916467 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config\") pod \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.916589 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config\") pod \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.916847 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle\") pod \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\" (UID: \"4e906613-b24c-4ee1-8b87-b8a7d7d20871\") " Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.937573 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49" (OuterVolumeSpecName: "kube-api-access-xsd49") pod "4e906613-b24c-4ee1-8b87-b8a7d7d20871" (UID: "4e906613-b24c-4ee1-8b87-b8a7d7d20871"). InnerVolumeSpecName "kube-api-access-xsd49". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:36 crc kubenswrapper[4801]: I1124 21:29:36.943491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4e906613-b24c-4ee1-8b87-b8a7d7d20871" (UID: "4e906613-b24c-4ee1-8b87-b8a7d7d20871"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.021821 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.021887 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsd49\" (UniqueName: \"kubernetes.io/projected/4e906613-b24c-4ee1-8b87-b8a7d7d20871-kube-api-access-xsd49\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.092267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e906613-b24c-4ee1-8b87-b8a7d7d20871" (UID: "4e906613-b24c-4ee1-8b87-b8a7d7d20871"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.113298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config" (OuterVolumeSpecName: "config") pod "4e906613-b24c-4ee1-8b87-b8a7d7d20871" (UID: "4e906613-b24c-4ee1-8b87-b8a7d7d20871"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.125032 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.125072 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.134530 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4e906613-b24c-4ee1-8b87-b8a7d7d20871" (UID: "4e906613-b24c-4ee1-8b87-b8a7d7d20871"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.228742 4801 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e906613-b24c-4ee1-8b87-b8a7d7d20871-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.728670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6996d87ddd-ph957" event={"ID":"4e906613-b24c-4ee1-8b87-b8a7d7d20871","Type":"ContainerDied","Data":"5fee1ec9a9ee7cede16a7f56c7854b4e9debe40a2ce9e5e0186c71a1d0599e88"} Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.729299 4801 scope.go:117] "RemoveContainer" containerID="c1290c152ff34be279b8ba314f8f4364edf48a522b440d9e8c251d1f6d2bd460" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.729483 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6996d87ddd-ph957" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.740002 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32333c22-5214-46fe-a77e-9268c3fda5a4","Type":"ContainerDied","Data":"c9fe6f02f3f2f2e1d39bd5aa2e0919616526929c0370d621a700a688acc33909"} Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.740071 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9fe6f02f3f2f2e1d39bd5aa2e0919616526929c0370d621a700a688acc33909" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.752801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0688ae99-2d69-43fc-a729-93f725fa31e6","Type":"ContainerDied","Data":"dc1c1a2796aa6becd1555e9f558f2470cba2e6ce38ceae7cbe0d1651f5199af9"} Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.752864 4801 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dc1c1a2796aa6becd1555e9f558f2470cba2e6ce38ceae7cbe0d1651f5199af9" Nov 24 21:29:37 crc kubenswrapper[4801]: I1124 21:29:37.990219 4801 scope.go:117] "RemoveContainer" containerID="9929003925df02c04b10bb5ba2a7aabe654756684512ad20c8328d14bccc5544" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.013774 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.019840 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.025123 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.025709 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6996d87ddd-ph957"] Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156432 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156516 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156619 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-526dl\" (UniqueName: \"kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " 
Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156677 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156761 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156791 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156807 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156895 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.156919 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157019 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157039 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157092 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lz9\" (UniqueName: \"kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157168 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157195 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs\") pod \"32333c22-5214-46fe-a77e-9268c3fda5a4\" (UID: \"32333c22-5214-46fe-a77e-9268c3fda5a4\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.157268 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id\") pod \"0688ae99-2d69-43fc-a729-93f725fa31e6\" (UID: \"0688ae99-2d69-43fc-a729-93f725fa31e6\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.158024 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.158203 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs" (OuterVolumeSpecName: "logs") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.159796 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.159821 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs" (OuterVolumeSpecName: "logs") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.186733 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts" (OuterVolumeSpecName: "scripts") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.187348 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts" (OuterVolumeSpecName: "scripts") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.203198 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9" (OuterVolumeSpecName: "kube-api-access-n7lz9") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "kube-api-access-n7lz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.223755 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.224074 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.224290 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl" (OuterVolumeSpecName: "kube-api-access-526dl") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "kube-api-access-526dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.261608 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.261649 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.261976 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262012 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 21:29:38 crc kubenswrapper[4801]: 
I1124 21:29:38.262025 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0688ae99-2d69-43fc-a729-93f725fa31e6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262035 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7lz9\" (UniqueName: \"kubernetes.io/projected/32333c22-5214-46fe-a77e-9268c3fda5a4-kube-api-access-n7lz9\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262049 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262059 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0688ae99-2d69-43fc-a729-93f725fa31e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262069 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32333c22-5214-46fe-a77e-9268c3fda5a4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.262078 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-526dl\" (UniqueName: \"kubernetes.io/projected/0688ae99-2d69-43fc-a729-93f725fa31e6-kube-api-access-526dl\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.279773 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.287343 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.316185 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.319141 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data" (OuterVolumeSpecName: "config-data") pod "0688ae99-2d69-43fc-a729-93f725fa31e6" (UID: "0688ae99-2d69-43fc-a729-93f725fa31e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.319596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data" (OuterVolumeSpecName: "config-data") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.337193 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "32333c22-5214-46fe-a77e-9268c3fda5a4" (UID: "32333c22-5214-46fe-a77e-9268c3fda5a4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365515 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365556 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365568 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365578 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365589 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32333c22-5214-46fe-a77e-9268c3fda5a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.365599 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688ae99-2d69-43fc-a729-93f725fa31e6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.564003 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.686494 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.686872 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.686976 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ps4\" (UniqueName: \"kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.687075 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.687382 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.687515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.687641 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data\") pod \"5e36110e-4c85-489f-bcec-305b72945ad0\" (UID: \"5e36110e-4c85-489f-bcec-305b72945ad0\") " Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.691969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.697470 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.706027 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" path="/var/lib/kubelet/pods/4e906613-b24c-4ee1-8b87-b8a7d7d20871/volumes" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.717655 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts" (OuterVolumeSpecName: "scripts") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.720258 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4" (OuterVolumeSpecName: "kube-api-access-v5ps4") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "kube-api-access-v5ps4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.800356 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.800416 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e36110e-4c85-489f-bcec-305b72945ad0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.800430 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5ps4\" (UniqueName: \"kubernetes.io/projected/5e36110e-4c85-489f-bcec-305b72945ad0-kube-api-access-v5ps4\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.800444 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.863341 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.903450 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.904688 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.957973 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.965706 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:38 crc kubenswrapper[4801]: I1124 21:29:38.985115 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podStartSLOduration=13.985067004 podStartE2EDuration="13.985067004s" podCreationTimestamp="2025-11-24 21:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:38.930833311 +0000 UTC m=+1351.013419981" watchObservedRunningTime="2025-11-24 21:29:38.985067004 +0000 UTC m=+1351.067653674" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.013545 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.088302 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.100737 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.885903592 podStartE2EDuration="20.100704688s" podCreationTimestamp="2025-11-24 21:29:19 +0000 UTC" firstStartedPulling="2025-11-24 21:29:20.908836634 +0000 UTC m=+1332.991423304" lastFinishedPulling="2025-11-24 21:29:37.12363773 +0000 UTC m=+1349.206224400" observedRunningTime="2025-11-24 21:29:39.005072777 +0000 UTC m=+1351.087659447" watchObservedRunningTime="2025-11-24 21:29:39.100704688 +0000 UTC m=+1351.183291368" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.136985 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.145759 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.145815 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.145845 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e36110e-4c85-489f-bcec-305b72945ad0","Type":"ContainerDied","Data":"7c947ad72a8d7f24dc122cf613e6c56e3e4e50b21de64956b5951e1b047cab0b"} Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.145882 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"c6be94bd-58b9-45e1-a18b-a85a048a5278","Type":"ContainerStarted","Data":"b3fb966107ab9d715dc2cd68f1dcf81fedd7fe5c6aedfb1d3b99ebc6ca4629f1"} Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.145917 4801 scope.go:117] "RemoveContainer" containerID="6fec548245ef1d0d0f5fd1c21302ebd682fbb41b8c911ded17635ae5d6203cf0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.152621 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.177330 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.191758 4801 scope.go:117] "RemoveContainer" containerID="5cae30b539e373f51778cc1f402c284fbfaab49c01dd767448cf8ca5c6952ecd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.267098 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.285008 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data" (OuterVolumeSpecName: "config-data") pod "5e36110e-4c85-489f-bcec-305b72945ad0" (UID: "5e36110e-4c85-489f-bcec-305b72945ad0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.288414 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313066 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313752 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313774 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api-log" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313789 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313796 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313815 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313821 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313838 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-central-agent" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313845 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-central-agent" Nov 24 21:29:39 crc 
kubenswrapper[4801]: E1124 21:29:39.313861 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313868 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313882 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313889 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313901 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313908 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313918 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="sg-core" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313924 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="sg-core" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313943 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-api" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313950 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-api" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313959 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-notification-agent" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313966 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-notification-agent" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.313986 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.313992 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.314012 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314018 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314248 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314258 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" containerName="cinder-api" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314272 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-notification-agent" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314286 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="proxy-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 
21:29:39.314301 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-api" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314312 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314319 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="sg-core" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314328 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314335 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314345 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-httpd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314356 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="447770ad-de60-4323-95a2-1c530eff1089" containerName="glance-log" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.314384 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" containerName="ceilometer-central-agent" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.327118 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.330463 4801 scope.go:117] "RemoveContainer" containerID="8aae75a4d84e2edf739d14bd43080bf2d9844201d023946ff3e2e703a396dffd" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.336343 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.336621 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.358910 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.365864 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.365931 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366088 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366156 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kklwj\" (UniqueName: 
\"kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366278 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366328 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366347 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.366531 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data\") pod \"447770ad-de60-4323-95a2-1c530eff1089\" (UID: \"447770ad-de60-4323-95a2-1c530eff1089\") " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.367775 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e36110e-4c85-489f-bcec-305b72945ad0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.379407 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.385531 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs" (OuterVolumeSpecName: "logs") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.385600 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.391081 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.391251 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts" (OuterVolumeSpecName: "scripts") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.409693 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj" (OuterVolumeSpecName: "kube-api-access-kklwj") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "kube-api-access-kklwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.410592 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: E1124 21:29:39.411317 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32333c22_5214_46fe_a77e_9268c3fda5a4.slice/crio-c9fe6f02f3f2f2e1d39bd5aa2e0919616526929c0370d621a700a688acc33909\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32333c22_5214_46fe_a77e_9268c3fda5a4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0688ae99_2d69_43fc_a729_93f725fa31e6.slice/crio-dc1c1a2796aa6becd1555e9f558f2470cba2e6ce38ceae7cbe0d1651f5199af9\": RecentStats: unable to find data in memory cache]" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.414584 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.417852 4801 scope.go:117] "RemoveContainer" containerID="c17f09dca0e1313f40eb8e5fe1d034aed4d75502d0f09e9173248286ae9227b8" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.427086 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.427442 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.435096 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.435313 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.441094 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.471916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472399 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472487 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kn8v\" (UniqueName: \"kubernetes.io/projected/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-kube-api-access-6kn8v\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472524 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472588 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472636 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472656 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-logs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472885 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472910 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472924 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447770ad-de60-4323-95a2-1c530eff1089-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472936 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kklwj\" (UniqueName: \"kubernetes.io/projected/447770ad-de60-4323-95a2-1c530eff1089-kube-api-access-kklwj\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.472963 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.478243 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.537600 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.576343 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5c15f16-617b-4244-a628-baaf35de4f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586349 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586618 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586745 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586782 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586888 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-scripts\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586919 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c15f16-617b-4244-a628-baaf35de4f1f-logs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586961 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kn8v\" (UniqueName: \"kubernetes.io/projected/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-kube-api-access-6kn8v\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.586990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data\") pod \"cinder-api-0\" (UID: 
\"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587035 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587062 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587172 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587197 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-logs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc 
kubenswrapper[4801]: I1124 21:29:39.587326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.587344 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tm9c\" (UniqueName: \"kubernetes.io/projected/f5c15f16-617b-4244-a628-baaf35de4f1f-kube-api-access-4tm9c\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.598920 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.599862 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-logs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.600092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.613679 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.615062 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.615884 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.656788 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.664529 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kn8v\" (UniqueName: \"kubernetes.io/projected/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-kube-api-access-6kn8v\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.666967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 
21:29:39.668099 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.688544 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f8c36c-e543-4ea8-972a-1c9fe6ba022f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.702265 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720315 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720411 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720463 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-scripts\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720486 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c15f16-617b-4244-a628-baaf35de4f1f-logs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") 
" pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720512 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720615 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tm9c\" (UniqueName: \"kubernetes.io/projected/f5c15f16-617b-4244-a628-baaf35de4f1f-kube-api-access-4tm9c\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720641 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5c15f16-617b-4244-a628-baaf35de4f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720690 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.720720 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.724214 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f5c15f16-617b-4244-a628-baaf35de4f1f-logs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.728886 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5c15f16-617b-4244-a628-baaf35de4f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.763597 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.766833 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.773083 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.773135 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.776249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.790903 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.794294 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.795764 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tm9c\" (UniqueName: \"kubernetes.io/projected/f5c15f16-617b-4244-a628-baaf35de4f1f-kube-api-access-4tm9c\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.803836 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-config-data\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.811643 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.834138 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5c15f16-617b-4244-a628-baaf35de4f1f-scripts\") pod \"cinder-api-0\" (UID: \"f5c15f16-617b-4244-a628-baaf35de4f1f\") " pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.834250 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.835138 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.865009 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.893902 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data" (OuterVolumeSpecName: "config-data") pod "447770ad-de60-4323-95a2-1c530eff1089" (UID: "447770ad-de60-4323-95a2-1c530eff1089"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.918663 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f\") " pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.935121 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.935197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " 
pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.937706 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.937943 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.938083 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdn4t\" (UniqueName: \"kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.938403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.938489 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.938753 4801 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.938773 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447770ad-de60-4323-95a2-1c530eff1089-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.965526 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77d858bcc9-bwkzb" event={"ID":"44f65da7-819a-43dd-9267-2b30cffff0f2","Type":"ContainerStarted","Data":"30bd4194b700235c9884447bb8ab906d5fc1cf02205d3aafc5060b76f841c950"} Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.981190 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"447770ad-de60-4323-95a2-1c530eff1089","Type":"ContainerDied","Data":"0bbc7f6088f30626c5335f85ef7b623f3f95228024d0fabad0808acbb83f5a09"} Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.981258 4801 scope.go:117] "RemoveContainer" containerID="ad2e57a16ec696a20855e2e6ea2cf1f6b90322c3a418fa33ea5d564570bcf1b0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.981553 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.982330 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:29:39 crc kubenswrapper[4801]: I1124 21:29:39.989790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5848585774-8jkmz" event={"ID":"e0c299ae-e6d9-4828-abeb-0b0627b487bc","Type":"ContainerStarted","Data":"3db99259ce9a678d96bd02665dfb9be1442b2e7815858a2b1521a0d505a3099f"} Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.012332 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.136278 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.158919 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.159018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.159053 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.159225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.159375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdn4t\" (UniqueName: \"kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.159658 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.160296 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.181116 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.189991 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.193500 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.197189 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdn4t\" (UniqueName: \"kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.199687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.200746 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data\") pod \"ceilometer-0\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.218707 4801 scope.go:117] "RemoveContainer" containerID="b90475abb82f1fd03eb3817683c40ba0f834979270b39d32fc43edec03c7d176" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.252298 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-t4fbs"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.320927 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-474vq"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.354800 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.364139 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f95e-account-create-zl2r5"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.381558 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qfhn4"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.417386 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.423538 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.432680 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.445559 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.465843 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cabb-account-create-8scsl"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.484324 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.523334 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef6a-account-create-d9b7h"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.523511 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.527921 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.551844 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.562380 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.597401 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.703235 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0688ae99-2d69-43fc-a729-93f725fa31e6" path="/var/lib/kubelet/pods/0688ae99-2d69-43fc-a729-93f725fa31e6/volumes" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.718094 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32333c22-5214-46fe-a77e-9268c3fda5a4" path="/var/lib/kubelet/pods/32333c22-5214-46fe-a77e-9268c3fda5a4/volumes" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.728729 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.728774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.728811 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-logs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.728913 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.728934 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfnx\" (UniqueName: \"kubernetes.io/projected/896d841d-ba4e-483a-b586-53227f9a9546-kube-api-access-4qfnx\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.729121 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.729143 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.731509 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447770ad-de60-4323-95a2-1c530eff1089" path="/var/lib/kubelet/pods/447770ad-de60-4323-95a2-1c530eff1089/volumes" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.734080 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.742551 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e36110e-4c85-489f-bcec-305b72945ad0" path="/var/lib/kubelet/pods/5e36110e-4c85-489f-bcec-305b72945ad0/volumes" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849057 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849201 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849227 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849254 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-logs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.849406 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfnx\" (UniqueName: \"kubernetes.io/projected/896d841d-ba4e-483a-b586-53227f9a9546-kube-api-access-4qfnx\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.850818 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.850866 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.852137 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.852993 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896d841d-ba4e-483a-b586-53227f9a9546-logs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.854220 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.866236 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.870337 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") 
" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.873530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-config-data\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.876546 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896d841d-ba4e-483a-b586-53227f9a9546-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.881329 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfnx\" (UniqueName: \"kubernetes.io/projected/896d841d-ba4e-483a-b586-53227f9a9546-kube-api-access-4qfnx\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.916451 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"896d841d-ba4e-483a-b586-53227f9a9546\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:29:40 crc kubenswrapper[4801]: I1124 21:29:40.973940 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.054553 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-474vq" 
event={"ID":"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d","Type":"ContainerStarted","Data":"9ff92529f8561cd50837702dd07f055991147967f023612795078e60554b3c55"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.089040 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfc99c655-lvmhc" event={"ID":"791d7adb-060a-423a-9e14-80279995f1ef","Type":"ContainerStarted","Data":"b198ec1f511d8c0429e705b5b809d2af6c7c06c9c882275bb6fe0c970b79d7bc"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.097877 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4fbs" event={"ID":"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd","Type":"ContainerStarted","Data":"fe8cd9df69776439c51ff19a0c005a4894fa0fde870e794667f0116860ae2413"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.117522 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cabb-account-create-8scsl" event={"ID":"2e075bbc-6e1b-4d86-a6c2-ae3de8941695","Type":"ContainerStarted","Data":"e3c3046e0ed659075d81ef826d48db696896650f09c8a798cd7a8957424529ca"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.133679 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" event={"ID":"2e02e8a2-d652-4aee-9d0b-6d8823dabb84","Type":"ContainerStarted","Data":"2163f07c8aa27820f18093754873f2c3eb0ffee99540482cec9bc2a8719e4124"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.153409 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.153560 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-httpd" probeResult="failure" output="HTTP 
probe failed with statuscode: 503" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.158797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" event={"ID":"03f744fe-61c0-4179-b7c2-406d3255d02c","Type":"ContainerStarted","Data":"ce900df2ed44cb439d409ac1c7eaac4ab6fcca08b84e0128e1c3d1295c8d8012"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.182102 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" event={"ID":"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7","Type":"ContainerStarted","Data":"2e17b2ea0687484b698ce7586fc3cf7e08c8949d52ffc2861c399165f726bf02"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.187691 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5848585774-8jkmz" event={"ID":"e0c299ae-e6d9-4828-abeb-0b0627b487bc","Type":"ContainerStarted","Data":"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.188065 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.199064 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f95e-account-create-zl2r5" event={"ID":"db1ce2cf-197e-41ee-abe5-652d675d160f","Type":"ContainerStarted","Data":"91f8b233221e1ab577e4e3f7f1b4d126101d637975734f6f362033f8931781c7"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.207910 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qfhn4" event={"ID":"80f05c39-1d67-4f70-a64f-fd2d8e160a58","Type":"ContainerStarted","Data":"6f9e5fbf11e50d6ff9f2101f400aa01c76e67a7c568a163a86f920b9a5d9f485"} Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.210697 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.214293 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.250183 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5848585774-8jkmz" podStartSLOduration=9.250157547 podStartE2EDuration="9.250157547s" podCreationTimestamp="2025-11-24 21:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:41.207165812 +0000 UTC m=+1353.289752482" watchObservedRunningTime="2025-11-24 21:29:41.250157547 +0000 UTC m=+1353.332744217" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.347758 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.356222 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.382590 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.391249 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.436671 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.473869 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.473960 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.474100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.474181 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s4bq\" (UniqueName: \"kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.535181 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.655351 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.655587 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.655919 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.656137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.668645 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s4bq\" (UniqueName: \"kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " 
pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.669111 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.669177 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kzfj\" (UniqueName: \"kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.669255 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.675672 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.676125 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " 
pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.678589 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.739137 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s4bq\" (UniqueName: \"kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq\") pod \"heat-cfnapi-fd8776ddc-8dg24\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.757117 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.769958 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.774954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.774995 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kzfj\" (UniqueName: \"kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.775046 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.775225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.787036 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " 
pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.800490 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.807907 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.810653 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.823408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kzfj\" (UniqueName: \"kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj\") pod \"heat-engine-c45b8fb99-cff82\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.879697 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnrp\" (UniqueName: \"kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.879922 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.879948 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.880019 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: W1124 21:29:41.889418 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9540639f_3226_4ed7_b540_1825b4d9b279.slice/crio-1d7c2cc07aac5171c2ec36448f983c10454439da4aa0035cac204c9e5390190b WatchSource:0}: Error finding container 1d7c2cc07aac5171c2ec36448f983c10454439da4aa0035cac204c9e5390190b: Status 404 returned error can't find the container with id 1d7c2cc07aac5171c2ec36448f983c10454439da4aa0035cac204c9e5390190b Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.912818 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.984384 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbnrp\" (UniqueName: \"kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp\") pod 
\"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.985043 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.985074 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.985163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:41 crc kubenswrapper[4801]: I1124 21:29:41.995932 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.002266 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " 
pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.002343 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.006139 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnrp\" (UniqueName: \"kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.014704 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom\") pod \"heat-api-555fc74868-qx28k\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.016521 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.048787 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.332519 4801 generic.go:334] "Generic (PLEG): container finished" podID="80f05c39-1d67-4f70-a64f-fd2d8e160a58" containerID="12fdaea0a45081b60c5dfc618e645014f4d19e001ed21add99965fd0c231392e" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.333690 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qfhn4" event={"ID":"80f05c39-1d67-4f70-a64f-fd2d8e160a58","Type":"ContainerDied","Data":"12fdaea0a45081b60c5dfc618e645014f4d19e001ed21add99965fd0c231392e"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.353232 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.365607 4801 generic.go:334] "Generic (PLEG): container finished" podID="2e075bbc-6e1b-4d86-a6c2-ae3de8941695" containerID="8385206f427c184de8ec2b3fb01dccad41edaeec6aa3612792b2b43dff97dd0b" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.365704 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cabb-account-create-8scsl" event={"ID":"2e075bbc-6e1b-4d86-a6c2-ae3de8941695","Type":"ContainerDied","Data":"8385206f427c184de8ec2b3fb01dccad41edaeec6aa3612792b2b43dff97dd0b"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.385609 4801 generic.go:334] "Generic (PLEG): container finished" podID="db1ce2cf-197e-41ee-abe5-652d675d160f" containerID="86b6beec5b549f854c1ff6db8775b70d3d8e02222ba1cd0b37eb601fabcc672d" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.385907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f95e-account-create-zl2r5" event={"ID":"db1ce2cf-197e-41ee-abe5-652d675d160f","Type":"ContainerDied","Data":"86b6beec5b549f854c1ff6db8775b70d3d8e02222ba1cd0b37eb601fabcc672d"} Nov 24 21:29:42 crc 
kubenswrapper[4801]: I1124 21:29:42.415469 4801 generic.go:334] "Generic (PLEG): container finished" podID="1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" containerID="99b582b2fe5975c7a35ad36ccf6a0b081f22b7c1e29dc3f4689060380f327e39" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.415565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4fbs" event={"ID":"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd","Type":"ContainerDied","Data":"99b582b2fe5975c7a35ad36ccf6a0b081f22b7c1e29dc3f4689060380f327e39"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.439182 4801 generic.go:334] "Generic (PLEG): container finished" podID="03f744fe-61c0-4179-b7c2-406d3255d02c" containerID="d96577b4d9e28a423bfe3b2de2d0a36299c0d16403c06338d62681f784a5b363" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.439259 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" event={"ID":"03f744fe-61c0-4179-b7c2-406d3255d02c","Type":"ContainerDied","Data":"d96577b4d9e28a423bfe3b2de2d0a36299c0d16403c06338d62681f784a5b363"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.441875 4801 generic.go:334] "Generic (PLEG): container finished" podID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerID="20981b6b52523d658876b84226b0a4412d0256c60020c536e52669358693b36e" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.441932 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" event={"ID":"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7","Type":"ContainerDied","Data":"20981b6b52523d658876b84226b0a4412d0256c60020c536e52669358693b36e"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.465053 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5c15f16-617b-4244-a628-baaf35de4f1f","Type":"ContainerStarted","Data":"dfd85a61f91f9401ea6ce282bd9396af0676c73454163eeb42bacfc5edf10c64"} Nov 24 
21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.526012 4801 generic.go:334] "Generic (PLEG): container finished" podID="f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" containerID="3da80d0a2ee04ecfe2af360b41cf934a4ef5ba8ee4feb894dd4fbb1b81d96ec8" exitCode=0 Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.526075 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-474vq" event={"ID":"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d","Type":"ContainerDied","Data":"3da80d0a2ee04ecfe2af360b41cf934a4ef5ba8ee4feb894dd4fbb1b81d96ec8"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.562539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerStarted","Data":"1d7c2cc07aac5171c2ec36448f983c10454439da4aa0035cac204c9e5390190b"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.572118 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f","Type":"ContainerStarted","Data":"e001340a73506e62afd159924cad364491ddd7f0629856d0e8ead8737418d10d"} Nov 24 21:29:42 crc kubenswrapper[4801]: I1124 21:29:42.628105 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.150260 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.628022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" event={"ID":"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7","Type":"ContainerStarted","Data":"81194875a4db0e96d7575732dc9ac2712fe2777611fb98bce9815e63167fca24"} Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.629029 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 
24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.632680 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.650183 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5c15f16-617b-4244-a628-baaf35de4f1f","Type":"ContainerStarted","Data":"a653bf03b226bbc6cdf284ea4eadd05c69ef8b5284b54510731784d2276d6728"} Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.661040 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896d841d-ba4e-483a-b586-53227f9a9546","Type":"ContainerStarted","Data":"ccda60fa8bbbdabbda7214c76a33fd09abca6b6f140143522151c9170d84e3d6"} Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.675841 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerStarted","Data":"55576c04feb443b074482100baf0690a27bc3b31469512e46ef1d35f7031e795"} Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.685265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerStarted","Data":"6cc23876ac1f106586c463f71eba883afd5b1e071376c0465754e232470699be"} Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.728879 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:29:43 crc kubenswrapper[4801]: I1124 21:29:43.780914 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" podStartSLOduration=11.780886731 podStartE2EDuration="11.780886731s" podCreationTimestamp="2025-11-24 21:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
21:29:43.66365889 +0000 UTC m=+1355.746245560" watchObservedRunningTime="2025-11-24 21:29:43.780886731 +0000 UTC m=+1355.863473401" Nov 24 21:29:44 crc kubenswrapper[4801]: I1124 21:29:44.715605 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f","Type":"ContainerStarted","Data":"2e0ed7471654990a3a0421644a5ed42e59bb45fdd723f7d551e46a55f24f9799"} Nov 24 21:29:44 crc kubenswrapper[4801]: I1124 21:29:44.719763 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896d841d-ba4e-483a-b586-53227f9a9546","Type":"ContainerStarted","Data":"5231483bb752f98864993297fd17466a211c6fadbe038e629a0fbc7bd95e6ed5"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.099511 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.127098 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.140298 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.142092 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.175268 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.175746 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.201242 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.225204 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.225260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.225337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.232950 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.233102 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnv28\" (UniqueName: \"kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.233139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.276760 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.302260 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:29:45 crc kubenswrapper[4801]: E1124 21:29:45.302770 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" containerName="mariadb-database-create" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.302792 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" containerName="mariadb-database-create" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.303128 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" containerName="mariadb-database-create" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.304078 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.306184 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.310788 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.311550 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.312080 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.335199 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.335263 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.335323 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.335435 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc 
kubenswrapper[4801]: I1124 21:29:45.335485 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnv28\" (UniqueName: \"kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.335507 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.352469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.354290 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.372779 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.373162 4801 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.373402 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.380455 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.384666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.420504 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnv28\" (UniqueName: \"kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28\") pod \"heat-cfnapi-7777df5b9c-9zr67\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.441459 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts\") pod \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.441826 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts\") pod \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.441944 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mhkc\" (UniqueName: \"kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc\") pod \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\" (UID: \"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.442016 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts\") pod \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.442494 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtw57\" (UniqueName: \"kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57\") pod \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\" (UID: \"2e075bbc-6e1b-4d86-a6c2-ae3de8941695\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.442624 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrdgh\" (UniqueName: \"kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh\") pod \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\" (UID: \"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443041 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle\") pod \"heat-api-66f65cf495-v7bwm\" (UID: 
\"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443131 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443228 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vt9d\" (UniqueName: \"kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443330 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.443762 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs\") pod 
\"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.444345 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" (UID: "1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.444828 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" (UID: "f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.449842 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e075bbc-6e1b-4d86-a6c2-ae3de8941695" (UID: "2e075bbc-6e1b-4d86-a6c2-ae3de8941695"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.474270 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc" (OuterVolumeSpecName: "kube-api-access-6mhkc") pod "f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" (UID: "f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d"). InnerVolumeSpecName "kube-api-access-6mhkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.511700 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57" (OuterVolumeSpecName: "kube-api-access-wtw57") pod "2e075bbc-6e1b-4d86-a6c2-ae3de8941695" (UID: "2e075bbc-6e1b-4d86-a6c2-ae3de8941695"). InnerVolumeSpecName "kube-api-access-wtw57". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.518671 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh" (OuterVolumeSpecName: "kube-api-access-jrdgh") pod "1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" (UID: "1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd"). InnerVolumeSpecName "kube-api-access-jrdgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.555178 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpsq9\" (UniqueName: \"kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9\") pod \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.555420 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts\") pod \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\" (UID: \"80f05c39-1d67-4f70-a64f-fd2d8e160a58\") " Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556176 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom\") pod 
\"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556236 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556317 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556347 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556413 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vt9d\" (UniqueName: \"kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556462 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data\") pod \"heat-api-66f65cf495-v7bwm\" (UID: 
\"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556602 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtw57\" (UniqueName: \"kubernetes.io/projected/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-kube-api-access-wtw57\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556621 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrdgh\" (UniqueName: \"kubernetes.io/projected/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-kube-api-access-jrdgh\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556634 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556646 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556659 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mhkc\" (UniqueName: \"kubernetes.io/projected/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d-kube-api-access-6mhkc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.556670 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e075bbc-6e1b-4d86-a6c2-ae3de8941695-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.565806 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "80f05c39-1d67-4f70-a64f-fd2d8e160a58" (UID: "80f05c39-1d67-4f70-a64f-fd2d8e160a58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.615618 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.616991 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.617128 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9" (OuterVolumeSpecName: "kube-api-access-lpsq9") pod "80f05c39-1d67-4f70-a64f-fd2d8e160a58" (UID: "80f05c39-1d67-4f70-a64f-fd2d8e160a58"). InnerVolumeSpecName "kube-api-access-lpsq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.621556 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.628163 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vt9d\" (UniqueName: \"kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.628867 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.644335 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.652199 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs\") pod \"heat-api-66f65cf495-v7bwm\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 
21:29:45.665664 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpsq9\" (UniqueName: \"kubernetes.io/projected/80f05c39-1d67-4f70-a64f-fd2d8e160a58-kube-api-access-lpsq9\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.665697 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f05c39-1d67-4f70-a64f-fd2d8e160a58-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.756683 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-474vq" event={"ID":"f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d","Type":"ContainerDied","Data":"9ff92529f8561cd50837702dd07f055991147967f023612795078e60554b3c55"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.756762 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ff92529f8561cd50837702dd07f055991147967f023612795078e60554b3c55" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.756871 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-474vq" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.771719 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-555fc74868-qx28k" event={"ID":"beb2d417-d295-4780-9946-3860c12149ed","Type":"ContainerStarted","Data":"1d18f7635262d7477fa3a36032cb070be23f046b73df0e69dae89f3593b1d3c4"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.793042 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4fbs" event={"ID":"1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd","Type":"ContainerDied","Data":"fe8cd9df69776439c51ff19a0c005a4894fa0fde870e794667f0116860ae2413"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.793091 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8cd9df69776439c51ff19a0c005a4894fa0fde870e794667f0116860ae2413" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.793172 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t4fbs" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.818185 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c45b8fb99-cff82" event={"ID":"b2db7465-73a5-4b4c-97ac-416d05659962","Type":"ContainerStarted","Data":"632bc7302d4053366f545b1cc0740a74112887b9dccc78bb5c119d82452acef1"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.822411 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cabb-account-create-8scsl" event={"ID":"2e075bbc-6e1b-4d86-a6c2-ae3de8941695","Type":"ContainerDied","Data":"e3c3046e0ed659075d81ef826d48db696896650f09c8a798cd7a8957424529ca"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.822438 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c3046e0ed659075d81ef826d48db696896650f09c8a798cd7a8957424529ca" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.822493 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cabb-account-create-8scsl" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.846149 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qfhn4" event={"ID":"80f05c39-1d67-4f70-a64f-fd2d8e160a58","Type":"ContainerDied","Data":"6f9e5fbf11e50d6ff9f2101f400aa01c76e67a7c568a163a86f920b9a5d9f485"} Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.846197 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9e5fbf11e50d6ff9f2101f400aa01c76e67a7c568a163a86f920b9a5d9f485" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.846276 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qfhn4" Nov 24 21:29:45 crc kubenswrapper[4801]: I1124 21:29:45.942929 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.115209 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.128620 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77d858bcc9-bwkzb" Nov 24 21:29:46 crc kubenswrapper[4801]: E1124 21:29:46.150275 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5de8675_66cf_41ac_9d83_aeeb7a0b6d4d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2d27a8_1962_4a8f_8c2c_b65fa371f0dd.slice/crio-fe8cd9df69776439c51ff19a0c005a4894fa0fde870e794667f0116860ae2413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f05c39_1d67_4f70_a64f_fd2d8e160a58.slice/crio-6f9e5fbf11e50d6ff9f2101f400aa01c76e67a7c568a163a86f920b9a5d9f485\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2d27a8_1962_4a8f_8c2c_b65fa371f0dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e075bbc_6e1b_4d86_a6c2_ae3de8941695.slice/crio-e3c3046e0ed659075d81ef826d48db696896650f09c8a798cd7a8957424529ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5de8675_66cf_41ac_9d83_aeeb7a0b6d4d.slice/crio-9ff92529f8561cd50837702dd07f055991147967f023612795078e60554b3c55\": RecentStats: unable to find data in memory cache]" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.862669 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"f5c15f16-617b-4244-a628-baaf35de4f1f","Type":"ContainerStarted","Data":"8c3c2684119bc81cd6d00ca44512f0a6d26598a83d4a08e084bdb032e8aee7a3"} Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.863113 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.868231 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" event={"ID":"03f744fe-61c0-4179-b7c2-406d3255d02c","Type":"ContainerDied","Data":"ce900df2ed44cb439d409ac1c7eaac4ab6fcca08b84e0128e1c3d1295c8d8012"} Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.868281 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce900df2ed44cb439d409ac1c7eaac4ab6fcca08b84e0128e1c3d1295c8d8012" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.870845 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f95e-account-create-zl2r5" event={"ID":"db1ce2cf-197e-41ee-abe5-652d675d160f","Type":"ContainerDied","Data":"91f8b233221e1ab577e4e3f7f1b4d126101d637975734f6f362033f8931781c7"} Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.870909 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f8b233221e1ab577e4e3f7f1b4d126101d637975734f6f362033f8931781c7" Nov 24 21:29:46 crc kubenswrapper[4801]: I1124 21:29:46.903884 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.903853888 podStartE2EDuration="7.903853888s" podCreationTimestamp="2025-11-24 21:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:46.885889592 +0000 UTC m=+1358.968476262" watchObservedRunningTime="2025-11-24 21:29:46.903853888 +0000 UTC m=+1358.986440558" Nov 24 21:29:46 crc 
kubenswrapper[4801]: I1124 21:29:46.998641 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.010222 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.034116 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts\") pod \"db1ce2cf-197e-41ee-abe5-652d675d160f\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.035451 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db1ce2cf-197e-41ee-abe5-652d675d160f" (UID: "db1ce2cf-197e-41ee-abe5-652d675d160f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.136618 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts\") pod \"03f744fe-61c0-4179-b7c2-406d3255d02c\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.137379 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr7cj\" (UniqueName: \"kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj\") pod \"03f744fe-61c0-4179-b7c2-406d3255d02c\" (UID: \"03f744fe-61c0-4179-b7c2-406d3255d02c\") " Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.137422 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8skjm\" (UniqueName: \"kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm\") pod \"db1ce2cf-197e-41ee-abe5-652d675d160f\" (UID: \"db1ce2cf-197e-41ee-abe5-652d675d160f\") " Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.137989 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03f744fe-61c0-4179-b7c2-406d3255d02c" (UID: "03f744fe-61c0-4179-b7c2-406d3255d02c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.138579 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db1ce2cf-197e-41ee-abe5-652d675d160f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.138599 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03f744fe-61c0-4179-b7c2-406d3255d02c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.143873 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm" (OuterVolumeSpecName: "kube-api-access-8skjm") pod "db1ce2cf-197e-41ee-abe5-652d675d160f" (UID: "db1ce2cf-197e-41ee-abe5-652d675d160f"). InnerVolumeSpecName "kube-api-access-8skjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.149558 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj" (OuterVolumeSpecName: "kube-api-access-nr7cj") pod "03f744fe-61c0-4179-b7c2-406d3255d02c" (UID: "03f744fe-61c0-4179-b7c2-406d3255d02c"). InnerVolumeSpecName "kube-api-access-nr7cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.241243 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr7cj\" (UniqueName: \"kubernetes.io/projected/03f744fe-61c0-4179-b7c2-406d3255d02c-kube-api-access-nr7cj\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.241282 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8skjm\" (UniqueName: \"kubernetes.io/projected/db1ce2cf-197e-41ee-abe5-652d675d160f-kube-api-access-8skjm\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.574436 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.890478 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.942918 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerStarted","Data":"037e863ed4c4a090c1f51bbc543a3d3f0e4685346236f429b75f083cf4d20a5c"} Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.953280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c45b8fb99-cff82" event={"ID":"b2db7465-73a5-4b4c-97ac-416d05659962","Type":"ContainerStarted","Data":"6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687"} Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.955200 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.970616 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" 
event={"ID":"6cd0ff51-4f1c-429e-9f84-e71c784a221a","Type":"ContainerStarted","Data":"fc096e8ca83e0d0cacf04f21a4a7f8b4454fedf673b0d5fda33480367e385c95"} Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.970696 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef6a-account-create-d9b7h" Nov 24 21:29:47 crc kubenswrapper[4801]: I1124 21:29:47.970771 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f95e-account-create-zl2r5" Nov 24 21:29:48 crc kubenswrapper[4801]: I1124 21:29:47.997790 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-c45b8fb99-cff82" podStartSLOduration=6.997763032 podStartE2EDuration="6.997763032s" podCreationTimestamp="2025-11-24 21:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:47.982428896 +0000 UTC m=+1360.065015566" watchObservedRunningTime="2025-11-24 21:29:47.997763032 +0000 UTC m=+1360.080349702" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.019957 4801 generic.go:334] "Generic (PLEG): container finished" podID="beb2d417-d295-4780-9946-3860c12149ed" containerID="d59b8ffff23f17e0596af61cb2dac57b25b78ccbb49494d29208261274535945" exitCode=1 Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.022531 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-555fc74868-qx28k" event={"ID":"beb2d417-d295-4780-9946-3860c12149ed","Type":"ContainerDied","Data":"d59b8ffff23f17e0596af61cb2dac57b25b78ccbb49494d29208261274535945"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.023556 4801 scope.go:117] "RemoveContainer" containerID="d59b8ffff23f17e0596af61cb2dac57b25b78ccbb49494d29208261274535945" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.041721 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerStarted","Data":"b421ab732b55eb1bbd5bae0c06c022d3522f3c8104858ee1d450509f44988f0b"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.043539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66f65cf495-v7bwm" event={"ID":"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a","Type":"ContainerStarted","Data":"ab35bff3f4c7c56045d3139b1462636057b7d38bcfa33f1cf4ec68c7448ec3e0"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.049010 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfc99c655-lvmhc" event={"ID":"791d7adb-060a-423a-9e14-80279995f1ef","Type":"ContainerStarted","Data":"f016e5b70fd8dd56c3eeeaf6feee1ef857b61c8b1840a023f14400d42a68b9fb"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.049190 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5dfc99c655-lvmhc" podUID="791d7adb-060a-423a-9e14-80279995f1ef" containerName="heat-api" containerID="cri-o://f016e5b70fd8dd56c3eeeaf6feee1ef857b61c8b1840a023f14400d42a68b9fb" gracePeriod=60 Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.049564 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.054970 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" event={"ID":"2e02e8a2-d652-4aee-9d0b-6d8823dabb84","Type":"ContainerStarted","Data":"cfa48d2ab9bb2bd17f7df35fb622ad067dd618ac69923fa87f2c02d5f975a532"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.055121 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" podUID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" containerName="heat-cfnapi" containerID="cri-o://cfa48d2ab9bb2bd17f7df35fb622ad067dd618ac69923fa87f2c02d5f975a532" 
gracePeriod=60 Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.055557 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.070243 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" event={"ID":"6cd0ff51-4f1c-429e-9f84-e71c784a221a","Type":"ContainerStarted","Data":"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.074451 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5dfc99c655-lvmhc" podStartSLOduration=10.24280118 podStartE2EDuration="17.074431491s" podCreationTimestamp="2025-11-24 21:29:32 +0000 UTC" firstStartedPulling="2025-11-24 21:29:40.289298507 +0000 UTC m=+1352.371885177" lastFinishedPulling="2025-11-24 21:29:47.120928818 +0000 UTC m=+1359.203515488" observedRunningTime="2025-11-24 21:29:49.069626232 +0000 UTC m=+1361.152212902" watchObservedRunningTime="2025-11-24 21:29:49.074431491 +0000 UTC m=+1361.157018161" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.083038 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896d841d-ba4e-483a-b586-53227f9a9546","Type":"ContainerStarted","Data":"91339a7d337b49806b010c5fd92cd2d372180367012aef27d170daa2c1bb3ffb"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.087417 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerStarted","Data":"0044596bca97f0f9abadc900b33f4cc0b37539a908ab211cfef7135e96f36463"} Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.087472 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.107996 
4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" podStartSLOduration=10.127715421 podStartE2EDuration="17.107973191s" podCreationTimestamp="2025-11-24 21:29:32 +0000 UTC" firstStartedPulling="2025-11-24 21:29:40.289152762 +0000 UTC m=+1352.371739422" lastFinishedPulling="2025-11-24 21:29:47.269410522 +0000 UTC m=+1359.351997192" observedRunningTime="2025-11-24 21:29:49.101146689 +0000 UTC m=+1361.183733359" watchObservedRunningTime="2025-11-24 21:29:49.107973191 +0000 UTC m=+1361.190559861" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.180790 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.180758167 podStartE2EDuration="9.180758167s" podCreationTimestamp="2025-11-24 21:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:49.161950805 +0000 UTC m=+1361.244537475" watchObservedRunningTime="2025-11-24 21:29:49.180758167 +0000 UTC m=+1361.263344837" Nov 24 21:29:49 crc kubenswrapper[4801]: I1124 21:29:49.200687 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" podStartSLOduration=4.012309664 podStartE2EDuration="8.200663315s" podCreationTimestamp="2025-11-24 21:29:41 +0000 UTC" firstStartedPulling="2025-11-24 21:29:43.184613631 +0000 UTC m=+1355.267200301" lastFinishedPulling="2025-11-24 21:29:47.372967282 +0000 UTC m=+1359.455553952" observedRunningTime="2025-11-24 21:29:49.187174647 +0000 UTC m=+1361.269761317" watchObservedRunningTime="2025-11-24 21:29:49.200663315 +0000 UTC m=+1361.283250005" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.009211 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m29qg"] Nov 24 21:29:50 crc kubenswrapper[4801]: E1124 21:29:50.010836 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" containerName="mariadb-database-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.010861 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" containerName="mariadb-database-create" Nov 24 21:29:50 crc kubenswrapper[4801]: E1124 21:29:50.010880 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f744fe-61c0-4179-b7c2-406d3255d02c" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.010889 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f744fe-61c0-4179-b7c2-406d3255d02c" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: E1124 21:29:50.010925 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e075bbc-6e1b-4d86-a6c2-ae3de8941695" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.010934 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e075bbc-6e1b-4d86-a6c2-ae3de8941695" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: E1124 21:29:50.010968 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ce2cf-197e-41ee-abe5-652d675d160f" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.010976 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ce2cf-197e-41ee-abe5-652d675d160f" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: E1124 21:29:50.011000 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f05c39-1d67-4f70-a64f-fd2d8e160a58" containerName="mariadb-database-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011007 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f05c39-1d67-4f70-a64f-fd2d8e160a58" containerName="mariadb-database-create" Nov 24 
21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011265 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" containerName="mariadb-database-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011294 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1ce2cf-197e-41ee-abe5-652d675d160f" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011309 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f744fe-61c0-4179-b7c2-406d3255d02c" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011323 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e075bbc-6e1b-4d86-a6c2-ae3de8941695" containerName="mariadb-account-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.011344 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f05c39-1d67-4f70-a64f-fd2d8e160a58" containerName="mariadb-database-create" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.012582 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.024786 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.025128 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.025245 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m76d2" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.031295 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m29qg"] Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.104247 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.110486 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf872\" (UniqueName: \"kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.110807 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " 
pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.110877 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.111886 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2f8c36c-e543-4ea8-972a-1c9fe6ba022f","Type":"ContainerStarted","Data":"6efa4650741c6bd691d9fb96b1bc944edf55bee005d7e5258cf7cc96b01f8077"} Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.146753 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66f65cf495-v7bwm" event={"ID":"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a","Type":"ContainerStarted","Data":"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496"} Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.147613 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.153040 4801 generic.go:334] "Generic (PLEG): container finished" podID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" containerID="cfa48d2ab9bb2bd17f7df35fb622ad067dd618ac69923fa87f2c02d5f975a532" exitCode=0 Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.153204 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" event={"ID":"2e02e8a2-d652-4aee-9d0b-6d8823dabb84","Type":"ContainerDied","Data":"cfa48d2ab9bb2bd17f7df35fb622ad067dd618ac69923fa87f2c02d5f975a532"} Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.167691 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=11.167657463 podStartE2EDuration="11.167657463s" podCreationTimestamp="2025-11-24 21:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:50.151829993 +0000 UTC m=+1362.234416673" watchObservedRunningTime="2025-11-24 21:29:50.167657463 +0000 UTC m=+1362.250244133" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.172129 4801 generic.go:334] "Generic (PLEG): container finished" podID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerID="0044596bca97f0f9abadc900b33f4cc0b37539a908ab211cfef7135e96f36463" exitCode=1 Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.172301 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerDied","Data":"0044596bca97f0f9abadc900b33f4cc0b37539a908ab211cfef7135e96f36463"} Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.173573 4801 scope.go:117] "RemoveContainer" containerID="0044596bca97f0f9abadc900b33f4cc0b37539a908ab211cfef7135e96f36463" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.186179 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-555fc74868-qx28k" event={"ID":"beb2d417-d295-4780-9946-3860c12149ed","Type":"ContainerStarted","Data":"91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b"} Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.186423 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.220068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") 
" pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.220689 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf872\" (UniqueName: \"kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.220794 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.220853 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.234341 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.241805 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-66f65cf495-v7bwm" podStartSLOduration=5.241777611 podStartE2EDuration="5.241777611s" podCreationTimestamp="2025-11-24 21:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:50.181017368 +0000 UTC m=+1362.263604038" watchObservedRunningTime="2025-11-24 21:29:50.241777611 +0000 UTC m=+1362.324364281" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.250154 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.253921 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.259044 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf872\" (UniqueName: \"kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872\") pod \"nova-cell0-conductor-db-sync-m29qg\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.275462 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-555fc74868-qx28k" podStartSLOduration=6.898886427 podStartE2EDuration="9.275435845s" podCreationTimestamp="2025-11-24 21:29:41 +0000 UTC" firstStartedPulling="2025-11-24 21:29:45.00145062 +0000 UTC m=+1357.084037290" lastFinishedPulling="2025-11-24 21:29:47.378000038 +0000 UTC m=+1359.460586708" observedRunningTime="2025-11-24 21:29:50.239044326 +0000 UTC m=+1362.321630997" watchObservedRunningTime="2025-11-24 21:29:50.275435845 +0000 
UTC m=+1362.358022515" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.282204 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" podStartSLOduration=5.282190885 podStartE2EDuration="5.282190885s" podCreationTimestamp="2025-11-24 21:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:29:50.271393589 +0000 UTC m=+1362.353980269" watchObservedRunningTime="2025-11-24 21:29:50.282190885 +0000 UTC m=+1362.364777545" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.338953 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:29:50 crc kubenswrapper[4801]: I1124 21:29:50.617066 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.028860 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.212005 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerStarted","Data":"3ac037fa1aa70beb446c8144810c2680bc8a12f8bdc0acbf41c052f2e47057e7"} Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.212639 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.212837 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.212856 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.215227 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjbsb\" (UniqueName: \"kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb\") pod \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.215278 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle\") pod \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.215318 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom\") pod \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " Nov 24 
21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.215382 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data\") pod \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\" (UID: \"2e02e8a2-d652-4aee-9d0b-6d8823dabb84\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.231709 4801 generic.go:334] "Generic (PLEG): container finished" podID="791d7adb-060a-423a-9e14-80279995f1ef" containerID="f016e5b70fd8dd56c3eeeaf6feee1ef857b61c8b1840a023f14400d42a68b9fb" exitCode=0 Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.231829 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfc99c655-lvmhc" event={"ID":"791d7adb-060a-423a-9e14-80279995f1ef","Type":"ContainerDied","Data":"f016e5b70fd8dd56c3eeeaf6feee1ef857b61c8b1840a023f14400d42a68b9fb"} Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.235406 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" event={"ID":"2e02e8a2-d652-4aee-9d0b-6d8823dabb84","Type":"ContainerDied","Data":"2163f07c8aa27820f18093754873f2c3eb0ffee99540482cec9bc2a8719e4124"} Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.235442 4801 scope.go:117] "RemoveContainer" containerID="cfa48d2ab9bb2bd17f7df35fb622ad067dd618ac69923fa87f2c02d5f975a532" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.235611 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf64d6fd8-98n98" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.248995 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.609802769 podStartE2EDuration="12.248969136s" podCreationTimestamp="2025-11-24 21:29:39 +0000 UTC" firstStartedPulling="2025-11-24 21:29:41.91859624 +0000 UTC m=+1354.001182910" lastFinishedPulling="2025-11-24 21:29:50.557762607 +0000 UTC m=+1362.640349277" observedRunningTime="2025-11-24 21:29:51.245077656 +0000 UTC m=+1363.327664326" watchObservedRunningTime="2025-11-24 21:29:51.248969136 +0000 UTC m=+1363.331555796" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.252459 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e02e8a2-d652-4aee-9d0b-6d8823dabb84" (UID: "2e02e8a2-d652-4aee-9d0b-6d8823dabb84"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.256395 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb" (OuterVolumeSpecName: "kube-api-access-tjbsb") pod "2e02e8a2-d652-4aee-9d0b-6d8823dabb84" (UID: "2e02e8a2-d652-4aee-9d0b-6d8823dabb84"). InnerVolumeSpecName "kube-api-access-tjbsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.257571 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerStarted","Data":"f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783"} Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.257665 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.265298 4801 generic.go:334] "Generic (PLEG): container finished" podID="beb2d417-d295-4780-9946-3860c12149ed" containerID="91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b" exitCode=1 Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.265580 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-555fc74868-qx28k" event={"ID":"beb2d417-d295-4780-9946-3860c12149ed","Type":"ContainerDied","Data":"91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b"} Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.265977 4801 scope.go:117] "RemoveContainer" containerID="91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b" Nov 24 21:29:51 crc kubenswrapper[4801]: E1124 21:29:51.266269 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-555fc74868-qx28k_openstack(beb2d417-d295-4780-9946-3860c12149ed)\"" pod="openstack/heat-api-555fc74868-qx28k" podUID="beb2d417-d295-4780-9946-3860c12149ed" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.283667 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"2e02e8a2-d652-4aee-9d0b-6d8823dabb84" (UID: "2e02e8a2-d652-4aee-9d0b-6d8823dabb84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.332814 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjbsb\" (UniqueName: \"kubernetes.io/projected/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-kube-api-access-tjbsb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.333276 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.333305 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.338725 4801 scope.go:117] "RemoveContainer" containerID="d59b8ffff23f17e0596af61cb2dac57b25b78ccbb49494d29208261274535945" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.356805 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.389526 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.435234 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data" (OuterVolumeSpecName: "config-data") pod "2e02e8a2-d652-4aee-9d0b-6d8823dabb84" (UID: "2e02e8a2-d652-4aee-9d0b-6d8823dabb84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.442724 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e02e8a2-d652-4aee-9d0b-6d8823dabb84-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.578071 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m29qg"] Nov 24 21:29:51 crc kubenswrapper[4801]: W1124 21:29:51.657414 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccceb717_3d31_47bd_a9af_983a5a247278.slice/crio-8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8 WatchSource:0}: Error finding container 8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8: Status 404 returned error can't find the container with id 8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8 Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.816641 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.846587 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.861781 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6bf64d6fd8-98n98"] Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.958468 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle\") pod \"791d7adb-060a-423a-9e14-80279995f1ef\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.958581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom\") pod \"791d7adb-060a-423a-9e14-80279995f1ef\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.958612 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks8h\" (UniqueName: \"kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h\") pod \"791d7adb-060a-423a-9e14-80279995f1ef\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.958689 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data\") pod \"791d7adb-060a-423a-9e14-80279995f1ef\" (UID: \"791d7adb-060a-423a-9e14-80279995f1ef\") " Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.970620 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "791d7adb-060a-423a-9e14-80279995f1ef" (UID: "791d7adb-060a-423a-9e14-80279995f1ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:51 crc kubenswrapper[4801]: I1124 21:29:51.974005 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h" (OuterVolumeSpecName: "kube-api-access-dks8h") pod "791d7adb-060a-423a-9e14-80279995f1ef" (UID: "791d7adb-060a-423a-9e14-80279995f1ef"). InnerVolumeSpecName "kube-api-access-dks8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.054496 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "791d7adb-060a-423a-9e14-80279995f1ef" (UID: "791d7adb-060a-423a-9e14-80279995f1ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.060779 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data" (OuterVolumeSpecName: "config-data") pod "791d7adb-060a-423a-9e14-80279995f1ef" (UID: "791d7adb-060a-423a-9e14-80279995f1ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.063086 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.063113 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks8h\" (UniqueName: \"kubernetes.io/projected/791d7adb-060a-423a-9e14-80279995f1ef-kube-api-access-dks8h\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.063127 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.063137 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d7adb-060a-423a-9e14-80279995f1ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.283876 4801 generic.go:334] "Generic (PLEG): container finished" podID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerID="f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783" exitCode=1 Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.284114 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerDied","Data":"f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783"} Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.284453 4801 scope.go:117] "RemoveContainer" containerID="0044596bca97f0f9abadc900b33f4cc0b37539a908ab211cfef7135e96f36463" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.285455 4801 scope.go:117] "RemoveContainer" 
containerID="f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783" Nov 24 21:29:52 crc kubenswrapper[4801]: E1124 21:29:52.285854 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-fd8776ddc-8dg24_openstack(b0a3564b-3b89-49cc-b2e4-908ef21839d8)\"" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.294228 4801 scope.go:117] "RemoveContainer" containerID="91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b" Nov 24 21:29:52 crc kubenswrapper[4801]: E1124 21:29:52.294576 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-555fc74868-qx28k_openstack(beb2d417-d295-4780-9946-3860c12149ed)\"" pod="openstack/heat-api-555fc74868-qx28k" podUID="beb2d417-d295-4780-9946-3860c12149ed" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.298749 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfc99c655-lvmhc" event={"ID":"791d7adb-060a-423a-9e14-80279995f1ef","Type":"ContainerDied","Data":"b198ec1f511d8c0429e705b5b809d2af6c7c06c9c882275bb6fe0c970b79d7bc"} Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.298841 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dfc99c655-lvmhc" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.318128 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m29qg" event={"ID":"ccceb717-3d31-47bd-a9af-983a5a247278","Type":"ContainerStarted","Data":"8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8"} Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.318668 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.318840 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.353806 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.397538 4801 scope.go:117] "RemoveContainer" containerID="f016e5b70fd8dd56c3eeeaf6feee1ef857b61c8b1840a023f14400d42a68b9fb" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.397790 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.498385 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5dfc99c655-lvmhc"] Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.528852 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.575198 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.694006 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" 
path="/var/lib/kubelet/pods/2e02e8a2-d652-4aee-9d0b-6d8823dabb84/volumes" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.697811 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791d7adb-060a-423a-9e14-80279995f1ef" path="/var/lib/kubelet/pods/791d7adb-060a-423a-9e14-80279995f1ef/volumes" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.788154 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.884413 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:29:52 crc kubenswrapper[4801]: I1124 21:29:52.886161 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="dnsmasq-dns" containerID="cri-o://92b8522dd835d96af406b9970e2045a6f90da7d8baa632db28272d13a2b7c9a7" gracePeriod=10 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.339480 4801 generic.go:334] "Generic (PLEG): container finished" podID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerID="92b8522dd835d96af406b9970e2045a6f90da7d8baa632db28272d13a2b7c9a7" exitCode=0 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.339572 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" event={"ID":"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed","Type":"ContainerDied","Data":"92b8522dd835d96af406b9970e2045a6f90da7d8baa632db28272d13a2b7c9a7"} Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.350842 4801 scope.go:117] "RemoveContainer" containerID="91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b" Nov 24 21:29:53 crc kubenswrapper[4801]: E1124 21:29:53.351571 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=heat-api pod=heat-api-555fc74868-qx28k_openstack(beb2d417-d295-4780-9946-3860c12149ed)\"" pod="openstack/heat-api-555fc74868-qx28k" podUID="beb2d417-d295-4780-9946-3860c12149ed" Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.352947 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-central-agent" containerID="cri-o://6cc23876ac1f106586c463f71eba883afd5b1e071376c0465754e232470699be" gracePeriod=30 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.353161 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="proxy-httpd" containerID="cri-o://3ac037fa1aa70beb446c8144810c2680bc8a12f8bdc0acbf41c052f2e47057e7" gracePeriod=30 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.353216 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="sg-core" containerID="cri-o://b421ab732b55eb1bbd5bae0c06c022d3522f3c8104858ee1d450509f44988f0b" gracePeriod=30 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.354050 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-notification-agent" containerID="cri-o://037e863ed4c4a090c1f51bbc543a3d3f0e4685346236f429b75f083cf4d20a5c" gracePeriod=30 Nov 24 21:29:53 crc kubenswrapper[4801]: I1124 21:29:53.355121 4801 scope.go:117] "RemoveContainer" containerID="f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783" Nov 24 21:29:53 crc kubenswrapper[4801]: E1124 21:29:53.355459 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=heat-cfnapi pod=heat-cfnapi-fd8776ddc-8dg24_openstack(b0a3564b-3b89-49cc-b2e4-908ef21839d8)\"" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.103641 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.135954 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvh4\" (UniqueName: \"kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.136099 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.136214 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.136277 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.136332 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.167309 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4" (OuterVolumeSpecName: "kube-api-access-skvh4") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "kube-api-access-skvh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.241889 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config\") pod \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\" (UID: \"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.243167 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvh4\" (UniqueName: \"kubernetes.io/projected/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-kube-api-access-skvh4\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.265474 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.287065 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.295589 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.311076 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.337185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config" (OuterVolumeSpecName: "config") pod "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" (UID: "0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.345876 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.345920 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.345977 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.346001 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.346020 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374130 4801 generic.go:334] "Generic (PLEG): container finished" podID="9540639f-3226-4ed7-b540-1825b4d9b279" containerID="3ac037fa1aa70beb446c8144810c2680bc8a12f8bdc0acbf41c052f2e47057e7" exitCode=0 Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374171 4801 generic.go:334] "Generic (PLEG): container finished" podID="9540639f-3226-4ed7-b540-1825b4d9b279" containerID="b421ab732b55eb1bbd5bae0c06c022d3522f3c8104858ee1d450509f44988f0b" exitCode=2 Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374184 4801 generic.go:334] "Generic (PLEG): container 
finished" podID="9540639f-3226-4ed7-b540-1825b4d9b279" containerID="037e863ed4c4a090c1f51bbc543a3d3f0e4685346236f429b75f083cf4d20a5c" exitCode=0 Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374194 4801 generic.go:334] "Generic (PLEG): container finished" podID="9540639f-3226-4ed7-b540-1825b4d9b279" containerID="6cc23876ac1f106586c463f71eba883afd5b1e071376c0465754e232470699be" exitCode=0 Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374258 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerDied","Data":"3ac037fa1aa70beb446c8144810c2680bc8a12f8bdc0acbf41c052f2e47057e7"} Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374293 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerDied","Data":"b421ab732b55eb1bbd5bae0c06c022d3522f3c8104858ee1d450509f44988f0b"} Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374304 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerDied","Data":"037e863ed4c4a090c1f51bbc543a3d3f0e4685346236f429b75f083cf4d20a5c"} Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.374314 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerDied","Data":"6cc23876ac1f106586c463f71eba883afd5b1e071376c0465754e232470699be"} Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.381163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" event={"ID":"0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed","Type":"ContainerDied","Data":"426b1c6cb6efacc5c62137e944d861c2d0fd6fbade02d238ed3457ab78d9eced"} Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.381221 4801 scope.go:117] "RemoveContainer" 
containerID="92b8522dd835d96af406b9970e2045a6f90da7d8baa632db28272d13a2b7c9a7" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.381502 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-74j6j" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.408081 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.482174 4801 scope.go:117] "RemoveContainer" containerID="3a42bf0730670b80a21f950f2b3de25fd72e7557049b93345609115e13c26067" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.505349 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.522403 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-74j6j"] Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561534 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561669 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561706 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 
21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561751 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdn4t\" (UniqueName: \"kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561778 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.561922 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.562023 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data\") pod \"9540639f-3226-4ed7-b540-1825b4d9b279\" (UID: \"9540639f-3226-4ed7-b540-1825b4d9b279\") " Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.563123 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.563242 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.572150 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t" (OuterVolumeSpecName: "kube-api-access-pdn4t") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "kube-api-access-pdn4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.573312 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts" (OuterVolumeSpecName: "scripts") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.606086 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.666037 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.666074 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.666085 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.666096 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdn4t\" (UniqueName: \"kubernetes.io/projected/9540639f-3226-4ed7-b540-1825b4d9b279-kube-api-access-pdn4t\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.666108 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9540639f-3226-4ed7-b540-1825b4d9b279-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.720653 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" path="/var/lib/kubelet/pods/0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed/volumes" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.744596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data" (OuterVolumeSpecName: "config-data") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.746568 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9540639f-3226-4ed7-b540-1825b4d9b279" (UID: "9540639f-3226-4ed7-b540-1825b4d9b279"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.768194 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:54 crc kubenswrapper[4801]: I1124 21:29:54.768489 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9540639f-3226-4ed7-b540-1825b4d9b279-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.409005 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9540639f-3226-4ed7-b540-1825b4d9b279","Type":"ContainerDied","Data":"1d7c2cc07aac5171c2ec36448f983c10454439da4aa0035cac204c9e5390190b"} Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.409081 4801 scope.go:117] "RemoveContainer" containerID="3ac037fa1aa70beb446c8144810c2680bc8a12f8bdc0acbf41c052f2e47057e7" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.409260 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.464901 4801 scope.go:117] "RemoveContainer" containerID="b421ab732b55eb1bbd5bae0c06c022d3522f3c8104858ee1d450509f44988f0b" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.482854 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.494332 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.608946 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.629961 4801 scope.go:117] "RemoveContainer" containerID="037e863ed4c4a090c1f51bbc543a3d3f0e4685346236f429b75f083cf4d20a5c" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.634837 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-notification-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.634894 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-notification-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635012 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="init" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635025 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="init" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635050 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="dnsmasq-dns" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635058 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="dnsmasq-dns" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635075 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="proxy-httpd" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635082 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="proxy-httpd" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635099 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-central-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635106 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-central-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635126 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="sg-core" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635133 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="sg-core" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635153 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791d7adb-060a-423a-9e14-80279995f1ef" containerName="heat-api" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635160 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="791d7adb-060a-423a-9e14-80279995f1ef" containerName="heat-api" Nov 24 21:29:55 crc kubenswrapper[4801]: E1124 21:29:55.635199 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" containerName="heat-cfnapi" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.635206 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" containerName="heat-cfnapi" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636260 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="791d7adb-060a-423a-9e14-80279995f1ef" containerName="heat-api" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636336 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d9cc6-b0c6-4faf-9ebe-18c5857f89ed" containerName="dnsmasq-dns" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636349 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-notification-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636419 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="ceilometer-central-agent" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636436 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="sg-core" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636460 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e02e8a2-d652-4aee-9d0b-6d8823dabb84" containerName="heat-cfnapi" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.636479 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" containerName="proxy-httpd" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.664047 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.667796 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.671712 4801 scope.go:117] "RemoveContainer" containerID="6cc23876ac1f106586c463f71eba883afd5b1e071376c0465754e232470699be" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.672898 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.673589 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.758117 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.758822 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.759090 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58tl\" (UniqueName: \"kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.759169 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.759199 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.759302 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.759347 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862290 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862386 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " 
pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862438 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862470 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862532 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862571 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.862659 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58tl\" (UniqueName: \"kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.863061 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.864131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.871426 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.871921 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.893098 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.904102 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:55 crc kubenswrapper[4801]: I1124 21:29:55.923622 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s58tl\" (UniqueName: \"kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl\") pod \"ceilometer-0\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " pod="openstack/ceilometer-0" Nov 24 21:29:56 crc kubenswrapper[4801]: I1124 21:29:56.011499 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:29:56 crc kubenswrapper[4801]: I1124 21:29:56.623095 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:56 crc kubenswrapper[4801]: I1124 21:29:56.692807 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9540639f-3226-4ed7-b540-1825b4d9b279" path="/var/lib/kubelet/pods/9540639f-3226-4ed7-b540-1825b4d9b279/volumes" Nov 24 21:29:56 crc kubenswrapper[4801]: I1124 21:29:56.845630 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="f5c15f16-617b-4244-a628-baaf35de4f1f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.222:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.005149 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.006265 4801 scope.go:117] "RemoveContainer" containerID="f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783" Nov 24 21:29:57 crc kubenswrapper[4801]: E1124 21:29:57.006562 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-fd8776ddc-8dg24_openstack(b0a3564b-3b89-49cc-b2e4-908ef21839d8)\"" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" Nov 
24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.319672 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.319838 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.443771 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:29:57 crc kubenswrapper[4801]: I1124 21:29:57.470806 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerStarted","Data":"c83984475e055b32fa1ee390c5f429b0d818d9f0f8ffd2f96aee8472d60763a9"} Nov 24 21:29:58 crc kubenswrapper[4801]: I1124 21:29:58.453079 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:29:58 crc kubenswrapper[4801]: I1124 21:29:58.529191 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:29:58 crc kubenswrapper[4801]: I1124 21:29:58.548760 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerStarted","Data":"e43875aff6d0d4421bbea9ddb5a3718a3b9f98ed56c756aa3aa9cf4efe433f9c"} Nov 24 21:29:58 crc kubenswrapper[4801]: I1124 21:29:58.948820 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.017698 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.327502 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 
21:29:59.362066 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.408586 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.564443 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data\") pod \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.565134 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom\") pod \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.565529 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s4bq\" (UniqueName: \"kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq\") pod \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.566307 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle\") pod \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\" (UID: \"b0a3564b-3b89-49cc-b2e4-908ef21839d8\") " Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.579690 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod 
"b0a3564b-3b89-49cc-b2e4-908ef21839d8" (UID: "b0a3564b-3b89-49cc-b2e4-908ef21839d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.580394 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq" (OuterVolumeSpecName: "kube-api-access-6s4bq") pod "b0a3564b-3b89-49cc-b2e4-908ef21839d8" (UID: "b0a3564b-3b89-49cc-b2e4-908ef21839d8"). InnerVolumeSpecName "kube-api-access-6s4bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.580492 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerStarted","Data":"04777bbd07611a03066280666de23330435a49d4ec38472f924de44fad095aaa"} Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.595377 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.596235 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd8776ddc-8dg24" event={"ID":"b0a3564b-3b89-49cc-b2e4-908ef21839d8","Type":"ContainerDied","Data":"55576c04feb443b074482100baf0690a27bc3b31469512e46ef1d35f7031e795"} Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.596357 4801 scope.go:117] "RemoveContainer" containerID="f7bb3d8bdba7003bd3deae468ca63942285693e25ff82158142d3a623ec55783" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.668390 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a3564b-3b89-49cc-b2e4-908ef21839d8" (UID: "b0a3564b-3b89-49cc-b2e4-908ef21839d8"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.671378 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s4bq\" (UniqueName: \"kubernetes.io/projected/b0a3564b-3b89-49cc-b2e4-908ef21839d8-kube-api-access-6s4bq\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.671568 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.671643 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.780156 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data" (OuterVolumeSpecName: "config-data") pod "b0a3564b-3b89-49cc-b2e4-908ef21839d8" (UID: "b0a3564b-3b89-49cc-b2e4-908ef21839d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.780430 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a3564b-3b89-49cc-b2e4-908ef21839d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.982877 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.983265 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:29:59 crc kubenswrapper[4801]: I1124 21:29:59.987514 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.001932 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.020847 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-fd8776ddc-8dg24"] Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.053224 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.082052 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.087126 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle\") pod \"beb2d417-d295-4780-9946-3860c12149ed\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.087208 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom\") pod \"beb2d417-d295-4780-9946-3860c12149ed\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.087517 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbnrp\" (UniqueName: \"kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp\") pod \"beb2d417-d295-4780-9946-3860c12149ed\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.087618 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data\") pod \"beb2d417-d295-4780-9946-3860c12149ed\" (UID: \"beb2d417-d295-4780-9946-3860c12149ed\") " Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.100593 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "beb2d417-d295-4780-9946-3860c12149ed" (UID: "beb2d417-d295-4780-9946-3860c12149ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.113893 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp" (OuterVolumeSpecName: "kube-api-access-fbnrp") pod "beb2d417-d295-4780-9946-3860c12149ed" (UID: "beb2d417-d295-4780-9946-3860c12149ed"). InnerVolumeSpecName "kube-api-access-fbnrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.142427 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beb2d417-d295-4780-9946-3860c12149ed" (UID: "beb2d417-d295-4780-9946-3860c12149ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.158520 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr"] Nov 24 21:30:00 crc kubenswrapper[4801]: E1124 21:30:00.159156 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159179 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: E1124 21:30:00.159209 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159217 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: E1124 21:30:00.159255 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159262 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:00 crc kubenswrapper[4801]: E1124 21:30:00.159278 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159284 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159560 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159583 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb2d417-d295-4780-9946-3860c12149ed" containerName="heat-api" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.159601 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.161141 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.167794 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.168154 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.190580 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.190614 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbnrp\" (UniqueName: \"kubernetes.io/projected/beb2d417-d295-4780-9946-3860c12149ed-kube-api-access-fbnrp\") on node 
\"crc\" DevicePath \"\"" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.190629 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.207400 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr"] Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.231249 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data" (OuterVolumeSpecName: "config-data") pod "beb2d417-d295-4780-9946-3860c12149ed" (UID: "beb2d417-d295-4780-9946-3860c12149ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:00 crc kubenswrapper[4801]: E1124 21:30:00.289149 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0a3564b_3b89_49cc_b2e4_908ef21839d8.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.293497 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxj7h\" (UniqueName: \"kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.294308 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume\") pod 
\"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.294455 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.294561 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb2d417-d295-4780-9946-3860c12149ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.400243 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.400717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.401207 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxj7h\" (UniqueName: \"kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h\") pod 
\"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.401749 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.406976 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.421266 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxj7h\" (UniqueName: \"kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h\") pod \"collect-profiles-29400330-hn5rr\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.492432 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.625687 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-555fc74868-qx28k" event={"ID":"beb2d417-d295-4780-9946-3860c12149ed","Type":"ContainerDied","Data":"1d18f7635262d7477fa3a36032cb070be23f046b73df0e69dae89f3593b1d3c4"} Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.626128 4801 scope.go:117] "RemoveContainer" containerID="91ea36c66fe2902f3dc16b041b75b856a2772f62ce5376ef05be52f88f41b40b" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.626242 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-555fc74868-qx28k" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.659533 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerStarted","Data":"6c39a1fec9223210ce084f5e118b913213b323a0f2fe8fae3c6283d9e90aec99"} Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.664287 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.664344 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.714183 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" path="/var/lib/kubelet/pods/b0a3564b-3b89-49cc-b2e4-908ef21839d8/volumes" Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.899750 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:30:00 crc kubenswrapper[4801]: I1124 21:30:00.915301 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-api-555fc74868-qx28k"] Nov 24 21:30:01 crc kubenswrapper[4801]: I1124 21:30:01.298699 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr"] Nov 24 21:30:01 crc kubenswrapper[4801]: W1124 21:30:01.301588 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357f85b7_cb19_4fe5_a2ca_009de3fdf2bc.slice/crio-10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962 WatchSource:0}: Error finding container 10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962: Status 404 returned error can't find the container with id 10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962 Nov 24 21:30:01 crc kubenswrapper[4801]: I1124 21:30:01.685815 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" event={"ID":"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc","Type":"ContainerStarted","Data":"5d7c9bb7c45997a3d3c6a1b0c418b0533992edbe2ccf55e6029143a9766a39ee"} Nov 24 21:30:01 crc kubenswrapper[4801]: I1124 21:30:01.686261 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" event={"ID":"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc","Type":"ContainerStarted","Data":"10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962"} Nov 24 21:30:01 crc kubenswrapper[4801]: I1124 21:30:01.710704 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" podStartSLOduration=1.710676211 podStartE2EDuration="1.710676211s" podCreationTimestamp="2025-11-24 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:01.699170124 +0000 UTC m=+1373.781756804" 
watchObservedRunningTime="2025-11-24 21:30:01.710676211 +0000 UTC m=+1373.793262881" Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.131135 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.191390 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.192094 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5848585774-8jkmz" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" containerID="cri-o://56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" gracePeriod=60 Nov 24 21:30:02 crc kubenswrapper[4801]: E1124 21:30:02.498483 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:02 crc kubenswrapper[4801]: E1124 21:30:02.509658 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:02 crc kubenswrapper[4801]: E1124 21:30:02.514469 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:02 crc kubenswrapper[4801]: 
E1124 21:30:02.514528 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5848585774-8jkmz" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.698416 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.698466 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:30:02 crc kubenswrapper[4801]: I1124 21:30:02.711021 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb2d417-d295-4780-9946-3860c12149ed" path="/var/lib/kubelet/pods/beb2d417-d295-4780-9946-3860c12149ed/volumes" Nov 24 21:30:03 crc kubenswrapper[4801]: I1124 21:30:03.722142 4801 generic.go:334] "Generic (PLEG): container finished" podID="357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" containerID="5d7c9bb7c45997a3d3c6a1b0c418b0533992edbe2ccf55e6029143a9766a39ee" exitCode=0 Nov 24 21:30:03 crc kubenswrapper[4801]: I1124 21:30:03.722341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" event={"ID":"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc","Type":"ContainerDied","Data":"5d7c9bb7c45997a3d3c6a1b0c418b0533992edbe2ccf55e6029143a9766a39ee"} Nov 24 21:30:04 crc kubenswrapper[4801]: I1124 21:30:04.842551 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="f5c15f16-617b-4244-a628-baaf35de4f1f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.222:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:30:05 crc kubenswrapper[4801]: I1124 21:30:05.843538 4801 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-6996d87ddd-ph957" podUID="4e906613-b24c-4ee1-8b87-b8a7d7d20871" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.195:9696/\": dial tcp 10.217.0.195:9696: i/o timeout (Client.Timeout exceeded while awaiting headers)" Nov 24 21:30:08 crc kubenswrapper[4801]: I1124 21:30:08.224986 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:30:08 crc kubenswrapper[4801]: I1124 21:30:08.225697 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:30:08 crc kubenswrapper[4801]: I1124 21:30:08.268304 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:30:09 crc kubenswrapper[4801]: I1124 21:30:09.138483 4801 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod32333c22-5214-46fe-a77e-9268c3fda5a4"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod32333c22-5214-46fe-a77e-9268c3fda5a4] : Timed out while waiting for systemd to remove kubepods-besteffort-pod32333c22_5214_46fe_a77e_9268c3fda5a4.slice" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.451131 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.572991 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume\") pod \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.573208 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume\") pod \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.573340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxj7h\" (UniqueName: \"kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h\") pod \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\" (UID: \"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc\") " Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.574499 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" (UID: "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.575501 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.581916 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h" (OuterVolumeSpecName: "kube-api-access-sxj7h") pod "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" (UID: "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc"). InnerVolumeSpecName "kube-api-access-sxj7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.606386 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" (UID: "357f85b7-cb19-4fe5-a2ca-009de3fdf2bc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.680820 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxj7h\" (UniqueName: \"kubernetes.io/projected/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-kube-api-access-sxj7h\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.680861 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.863921 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" event={"ID":"357f85b7-cb19-4fe5-a2ca-009de3fdf2bc","Type":"ContainerDied","Data":"10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962"} Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.864334 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10293fdb1736244917567887b2270ef0016259973a87633beecd270e20837962" Nov 24 21:30:10 crc kubenswrapper[4801]: I1124 21:30:10.864191 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr" Nov 24 21:30:11 crc kubenswrapper[4801]: E1124 21:30:11.249944 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Nov 24 21:30:11 crc kubenswrapper[4801]: E1124 21:30:11.250163 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePull
Policy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-m29qg_openstack(ccceb717-3d31-47bd-a9af-983a5a247278): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:30:11 crc kubenswrapper[4801]: E1124 21:30:11.252322 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-m29qg" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.886502 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerStarted","Data":"ce9494a0c546be01945a4e4c01f7b03458a1ef7c0ef1687713ae3b191b612c03"} Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.887740 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.887264 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="proxy-httpd" containerID="cri-o://ce9494a0c546be01945a4e4c01f7b03458a1ef7c0ef1687713ae3b191b612c03" gracePeriod=30 Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.887286 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="sg-core" containerID="cri-o://6c39a1fec9223210ce084f5e118b913213b323a0f2fe8fae3c6283d9e90aec99" gracePeriod=30 Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.887311 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-notification-agent" containerID="cri-o://04777bbd07611a03066280666de23330435a49d4ec38472f924de44fad095aaa" gracePeriod=30 Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.886669 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-central-agent" containerID="cri-o://e43875aff6d0d4421bbea9ddb5a3718a3b9f98ed56c756aa3aa9cf4efe433f9c" gracePeriod=30 Nov 24 21:30:11 crc kubenswrapper[4801]: E1124 21:30:11.893688 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-m29qg" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" Nov 24 21:30:11 crc kubenswrapper[4801]: I1124 21:30:11.932382 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.273209111 podStartE2EDuration="16.932345015s" podCreationTimestamp="2025-11-24 21:29:55 +0000 UTC" firstStartedPulling="2025-11-24 21:29:56.626066118 +0000 UTC m=+1368.708652788" lastFinishedPulling="2025-11-24 21:30:11.285202022 +0000 UTC m=+1383.367788692" observedRunningTime="2025-11-24 21:30:11.923094108 +0000 UTC m=+1384.005680778" watchObservedRunningTime="2025-11-24 21:30:11.932345015 +0000 
UTC m=+1384.014931685" Nov 24 21:30:12 crc kubenswrapper[4801]: E1124 21:30:12.492139 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 is running failed: container process not found" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:12 crc kubenswrapper[4801]: E1124 21:30:12.494360 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 is running failed: container process not found" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:12 crc kubenswrapper[4801]: E1124 21:30:12.495177 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 is running failed: container process not found" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:30:12 crc kubenswrapper[4801]: E1124 21:30:12.495256 4801 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-5848585774-8jkmz" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.798464 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.861716 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggjw6\" (UniqueName: \"kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6\") pod \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.862329 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom\") pod \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.862639 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data\") pod \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.862881 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle\") pod \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\" (UID: \"e0c299ae-e6d9-4828-abeb-0b0627b487bc\") " Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.870602 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6" (OuterVolumeSpecName: "kube-api-access-ggjw6") pod "e0c299ae-e6d9-4828-abeb-0b0627b487bc" (UID: "e0c299ae-e6d9-4828-abeb-0b0627b487bc"). InnerVolumeSpecName "kube-api-access-ggjw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.872519 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e0c299ae-e6d9-4828-abeb-0b0627b487bc" (UID: "e0c299ae-e6d9-4828-abeb-0b0627b487bc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.907594 4801 generic.go:334] "Generic (PLEG): container finished" podID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerID="6c39a1fec9223210ce084f5e118b913213b323a0f2fe8fae3c6283d9e90aec99" exitCode=2 Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.907631 4801 generic.go:334] "Generic (PLEG): container finished" podID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerID="04777bbd07611a03066280666de23330435a49d4ec38472f924de44fad095aaa" exitCode=0 Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.907720 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerDied","Data":"6c39a1fec9223210ce084f5e118b913213b323a0f2fe8fae3c6283d9e90aec99"} Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.907757 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerDied","Data":"04777bbd07611a03066280666de23330435a49d4ec38472f924de44fad095aaa"} Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.921196 4801 generic.go:334] "Generic (PLEG): container finished" podID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" exitCode=0 Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.921702 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-5848585774-8jkmz" event={"ID":"e0c299ae-e6d9-4828-abeb-0b0627b487bc","Type":"ContainerDied","Data":"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1"} Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.921832 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5848585774-8jkmz" event={"ID":"e0c299ae-e6d9-4828-abeb-0b0627b487bc","Type":"ContainerDied","Data":"3db99259ce9a678d96bd02665dfb9be1442b2e7815858a2b1521a0d505a3099f"} Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.921962 4801 scope.go:117] "RemoveContainer" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.922445 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5848585774-8jkmz" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.960039 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data" (OuterVolumeSpecName: "config-data") pod "e0c299ae-e6d9-4828-abeb-0b0627b487bc" (UID: "e0c299ae-e6d9-4828-abeb-0b0627b487bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.967263 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.967301 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.967311 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggjw6\" (UniqueName: \"kubernetes.io/projected/e0c299ae-e6d9-4828-abeb-0b0627b487bc-kube-api-access-ggjw6\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.997188 4801 scope.go:117] "RemoveContainer" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" Nov 24 21:30:12 crc kubenswrapper[4801]: E1124 21:30:12.997747 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1\": container with ID starting with 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 not found: ID does not exist" containerID="56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1" Nov 24 21:30:12 crc kubenswrapper[4801]: I1124 21:30:12.997787 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1"} err="failed to get container status \"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1\": rpc error: code = NotFound desc = could not find container \"56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1\": container with ID 
starting with 56fe77ac5388159bfeea48f9f075bd8cdc7e8036fd84b6781b9efb1fd8894af1 not found: ID does not exist" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.001397 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c299ae-e6d9-4828-abeb-0b0627b487bc" (UID: "e0c299ae-e6d9-4828-abeb-0b0627b487bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.071836 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c299ae-e6d9-4828-abeb-0b0627b487bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.261427 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.273166 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5848585774-8jkmz"] Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.582678 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:13 crc kubenswrapper[4801]: E1124 21:30:13.584125 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.584156 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" Nov 24 21:30:13 crc kubenswrapper[4801]: E1124 21:30:13.584246 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" containerName="collect-profiles" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.584260 4801 
state_mem.go:107] "Deleted CPUSet assignment" podUID="357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" containerName="collect-profiles" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.584617 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" containerName="heat-engine" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.584679 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" containerName="collect-profiles" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.584691 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a3564b-3b89-49cc-b2e4-908ef21839d8" containerName="heat-cfnapi" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.587244 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.599297 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.695727 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.695967 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.695993 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkm4\" (UniqueName: \"kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.798556 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.798621 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkm4\" (UniqueName: \"kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.798713 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.799295 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.799391 4801 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.817048 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkm4\" (UniqueName: \"kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4\") pod \"redhat-operators-24gx6\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:13 crc kubenswrapper[4801]: I1124 21:30:13.920294 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.466669 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:14 crc kubenswrapper[4801]: W1124 21:30:14.467178 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd841b081_9388_4039_8f24_1cb9f351e987.slice/crio-e21bc09b40045387eba97b4c8cde8eeb3a5a3e2ee6f2adc1bded7a5971ec50cd WatchSource:0}: Error finding container e21bc09b40045387eba97b4c8cde8eeb3a5a3e2ee6f2adc1bded7a5971ec50cd: Status 404 returned error can't find the container with id e21bc09b40045387eba97b4c8cde8eeb3a5a3e2ee6f2adc1bded7a5971ec50cd Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.682217 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c299ae-e6d9-4828-abeb-0b0627b487bc" path="/var/lib/kubelet/pods/e0c299ae-e6d9-4828-abeb-0b0627b487bc/volumes" Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.955680 4801 generic.go:334] "Generic (PLEG): container finished" podID="cab5894e-5c00-4eaa-a829-9adcef352f4a" 
containerID="e43875aff6d0d4421bbea9ddb5a3718a3b9f98ed56c756aa3aa9cf4efe433f9c" exitCode=0 Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.955739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerDied","Data":"e43875aff6d0d4421bbea9ddb5a3718a3b9f98ed56c756aa3aa9cf4efe433f9c"} Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.960458 4801 generic.go:334] "Generic (PLEG): container finished" podID="d841b081-9388-4039-8f24-1cb9f351e987" containerID="5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce" exitCode=0 Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.960521 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerDied","Data":"5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce"} Nov 24 21:30:14 crc kubenswrapper[4801]: I1124 21:30:14.960553 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerStarted","Data":"e21bc09b40045387eba97b4c8cde8eeb3a5a3e2ee6f2adc1bded7a5971ec50cd"} Nov 24 21:30:15 crc kubenswrapper[4801]: I1124 21:30:15.977054 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerStarted","Data":"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613"} Nov 24 21:30:21 crc kubenswrapper[4801]: I1124 21:30:21.052100 4801 generic.go:334] "Generic (PLEG): container finished" podID="d841b081-9388-4039-8f24-1cb9f351e987" containerID="74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613" exitCode=0 Nov 24 21:30:21 crc kubenswrapper[4801]: I1124 21:30:21.052339 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerDied","Data":"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613"} Nov 24 21:30:22 crc kubenswrapper[4801]: I1124 21:30:22.120507 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerStarted","Data":"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362"} Nov 24 21:30:22 crc kubenswrapper[4801]: I1124 21:30:22.159798 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24gx6" podStartSLOduration=2.67774263 podStartE2EDuration="9.159766186s" podCreationTimestamp="2025-11-24 21:30:13 +0000 UTC" firstStartedPulling="2025-11-24 21:30:14.963017452 +0000 UTC m=+1387.045604132" lastFinishedPulling="2025-11-24 21:30:21.445041018 +0000 UTC m=+1393.527627688" observedRunningTime="2025-11-24 21:30:22.152753339 +0000 UTC m=+1394.235340009" watchObservedRunningTime="2025-11-24 21:30:22.159766186 +0000 UTC m=+1394.242352856" Nov 24 21:30:23 crc kubenswrapper[4801]: I1124 21:30:23.920892 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:23 crc kubenswrapper[4801]: I1124 21:30:23.921305 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:24 crc kubenswrapper[4801]: I1124 21:30:24.979803 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-24gx6" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="registry-server" probeResult="failure" output=< Nov 24 21:30:24 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:30:24 crc kubenswrapper[4801]: > Nov 24 21:30:26 crc kubenswrapper[4801]: I1124 
21:30:26.020551 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:30:26 crc kubenswrapper[4801]: I1124 21:30:26.229929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m29qg" event={"ID":"ccceb717-3d31-47bd-a9af-983a5a247278","Type":"ContainerStarted","Data":"2c052e48e13ed5147d943bde248a33072b94719409e2e6e97569ed947aaa4f10"} Nov 24 21:30:26 crc kubenswrapper[4801]: I1124 21:30:26.373878 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m29qg" podStartSLOduration=3.47541512 podStartE2EDuration="37.373844392s" podCreationTimestamp="2025-11-24 21:29:49 +0000 UTC" firstStartedPulling="2025-11-24 21:29:51.665069537 +0000 UTC m=+1363.747656207" lastFinishedPulling="2025-11-24 21:30:25.563498809 +0000 UTC m=+1397.646085479" observedRunningTime="2025-11-24 21:30:26.250843809 +0000 UTC m=+1398.333430469" watchObservedRunningTime="2025-11-24 21:30:26.373844392 +0000 UTC m=+1398.456431062" Nov 24 21:30:33 crc kubenswrapper[4801]: I1124 21:30:33.992887 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:34 crc kubenswrapper[4801]: I1124 21:30:34.059003 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:34 crc kubenswrapper[4801]: I1124 21:30:34.236176 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:35 crc kubenswrapper[4801]: I1124 21:30:35.333545 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-24gx6" podUID="d841b081-9388-4039-8f24-1cb9f351e987" 
containerName="registry-server" containerID="cri-o://46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362" gracePeriod=2 Nov 24 21:30:35 crc kubenswrapper[4801]: I1124 21:30:35.950408 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.041120 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities\") pod \"d841b081-9388-4039-8f24-1cb9f351e987\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.041236 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjkm4\" (UniqueName: \"kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4\") pod \"d841b081-9388-4039-8f24-1cb9f351e987\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.041589 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content\") pod \"d841b081-9388-4039-8f24-1cb9f351e987\" (UID: \"d841b081-9388-4039-8f24-1cb9f351e987\") " Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.042693 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities" (OuterVolumeSpecName: "utilities") pod "d841b081-9388-4039-8f24-1cb9f351e987" (UID: "d841b081-9388-4039-8f24-1cb9f351e987"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.050723 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4" (OuterVolumeSpecName: "kube-api-access-gjkm4") pod "d841b081-9388-4039-8f24-1cb9f351e987" (UID: "d841b081-9388-4039-8f24-1cb9f351e987"). InnerVolumeSpecName "kube-api-access-gjkm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.145097 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d841b081-9388-4039-8f24-1cb9f351e987" (UID: "d841b081-9388-4039-8f24-1cb9f351e987"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.145253 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.145284 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjkm4\" (UniqueName: \"kubernetes.io/projected/d841b081-9388-4039-8f24-1cb9f351e987-kube-api-access-gjkm4\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.247945 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d841b081-9388-4039-8f24-1cb9f351e987-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.346814 4801 generic.go:334] "Generic (PLEG): container finished" podID="d841b081-9388-4039-8f24-1cb9f351e987" 
containerID="46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362" exitCode=0 Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.346864 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24gx6" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.346876 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerDied","Data":"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362"} Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.346953 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24gx6" event={"ID":"d841b081-9388-4039-8f24-1cb9f351e987","Type":"ContainerDied","Data":"e21bc09b40045387eba97b4c8cde8eeb3a5a3e2ee6f2adc1bded7a5971ec50cd"} Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.346974 4801 scope.go:117] "RemoveContainer" containerID="46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.387472 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.389001 4801 scope.go:117] "RemoveContainer" containerID="74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.398714 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24gx6"] Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.420742 4801 scope.go:117] "RemoveContainer" containerID="5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.478745 4801 scope.go:117] "RemoveContainer" containerID="46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362" Nov 24 21:30:36 crc 
kubenswrapper[4801]: E1124 21:30:36.479418 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362\": container with ID starting with 46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362 not found: ID does not exist" containerID="46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.479447 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362"} err="failed to get container status \"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362\": rpc error: code = NotFound desc = could not find container \"46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362\": container with ID starting with 46a107b3c63f84a3cde62023bd80b9dea4bf5925d900a690d9f966c3e4669362 not found: ID does not exist" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.479475 4801 scope.go:117] "RemoveContainer" containerID="74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613" Nov 24 21:30:36 crc kubenswrapper[4801]: E1124 21:30:36.479907 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613\": container with ID starting with 74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613 not found: ID does not exist" containerID="74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.479934 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613"} err="failed to get container status 
\"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613\": rpc error: code = NotFound desc = could not find container \"74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613\": container with ID starting with 74cbb05e7c3074f8c4ef74e3fee25de4bfeceb385b216019af353ca787219613 not found: ID does not exist" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.479952 4801 scope.go:117] "RemoveContainer" containerID="5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce" Nov 24 21:30:36 crc kubenswrapper[4801]: E1124 21:30:36.480334 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce\": container with ID starting with 5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce not found: ID does not exist" containerID="5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.480500 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce"} err="failed to get container status \"5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce\": rpc error: code = NotFound desc = could not find container \"5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce\": container with ID starting with 5256297c1872300bdc7f9bbccabc4fd65ce9a69c27a8a8912e45169a4931a8ce not found: ID does not exist" Nov 24 21:30:36 crc kubenswrapper[4801]: I1124 21:30:36.678348 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d841b081-9388-4039-8f24-1cb9f351e987" path="/var/lib/kubelet/pods/d841b081-9388-4039-8f24-1cb9f351e987/volumes" Nov 24 21:30:39 crc kubenswrapper[4801]: I1124 21:30:39.387219 4801 generic.go:334] "Generic (PLEG): container finished" podID="ccceb717-3d31-47bd-a9af-983a5a247278" 
containerID="2c052e48e13ed5147d943bde248a33072b94719409e2e6e97569ed947aaa4f10" exitCode=0 Nov 24 21:30:39 crc kubenswrapper[4801]: I1124 21:30:39.387296 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m29qg" event={"ID":"ccceb717-3d31-47bd-a9af-983a5a247278","Type":"ContainerDied","Data":"2c052e48e13ed5147d943bde248a33072b94719409e2e6e97569ed947aaa4f10"} Nov 24 21:30:40 crc kubenswrapper[4801]: I1124 21:30:40.915347 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.024232 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts\") pod \"ccceb717-3d31-47bd-a9af-983a5a247278\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.025449 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data\") pod \"ccceb717-3d31-47bd-a9af-983a5a247278\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.025596 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf872\" (UniqueName: \"kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872\") pod \"ccceb717-3d31-47bd-a9af-983a5a247278\" (UID: \"ccceb717-3d31-47bd-a9af-983a5a247278\") " Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.025688 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle\") pod \"ccceb717-3d31-47bd-a9af-983a5a247278\" (UID: 
\"ccceb717-3d31-47bd-a9af-983a5a247278\") " Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.036519 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts" (OuterVolumeSpecName: "scripts") pod "ccceb717-3d31-47bd-a9af-983a5a247278" (UID: "ccceb717-3d31-47bd-a9af-983a5a247278"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.076650 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872" (OuterVolumeSpecName: "kube-api-access-gf872") pod "ccceb717-3d31-47bd-a9af-983a5a247278" (UID: "ccceb717-3d31-47bd-a9af-983a5a247278"). InnerVolumeSpecName "kube-api-access-gf872". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.131532 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.131572 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf872\" (UniqueName: \"kubernetes.io/projected/ccceb717-3d31-47bd-a9af-983a5a247278-kube-api-access-gf872\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.154276 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccceb717-3d31-47bd-a9af-983a5a247278" (UID: "ccceb717-3d31-47bd-a9af-983a5a247278"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.190871 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data" (OuterVolumeSpecName: "config-data") pod "ccceb717-3d31-47bd-a9af-983a5a247278" (UID: "ccceb717-3d31-47bd-a9af-983a5a247278"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.202352 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-v8w6x"] Nov 24 21:30:41 crc kubenswrapper[4801]: E1124 21:30:41.205601 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="extract-content" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.205747 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="extract-content" Nov 24 21:30:41 crc kubenswrapper[4801]: E1124 21:30:41.205853 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" containerName="nova-cell0-conductor-db-sync" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.205918 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" containerName="nova-cell0-conductor-db-sync" Nov 24 21:30:41 crc kubenswrapper[4801]: E1124 21:30:41.206005 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="registry-server" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.213334 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="registry-server" Nov 24 21:30:41 crc kubenswrapper[4801]: E1124 21:30:41.213587 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="extract-utilities" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.213700 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="extract-utilities" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.214245 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" containerName="nova-cell0-conductor-db-sync" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.214354 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d841b081-9388-4039-8f24-1cb9f351e987" containerName="registry-server" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.215486 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.240094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzpq\" (UniqueName: \"kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.240172 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.240757 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.240794 4801 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccceb717-3d31-47bd-a9af-983a5a247278-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.251890 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v8w6x"] Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.375590 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzpq\" (UniqueName: \"kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.375666 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.376732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.406545 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzpq\" (UniqueName: \"kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq\") pod \"aodh-db-create-v8w6x\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.421970 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-92dd-account-create-frmpq"] Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.426610 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m29qg" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.427616 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m29qg" event={"ID":"ccceb717-3d31-47bd-a9af-983a5a247278","Type":"ContainerDied","Data":"8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8"} Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.427678 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.427784 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.433978 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.463728 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-92dd-account-create-frmpq"] Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.479275 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj29k\" (UniqueName: \"kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k\") pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.479609 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts\") 
pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.527965 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.530215 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.532231 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.534255 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m76d2" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.548832 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.583629 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2v7\" (UniqueName: \"kubernetes.io/projected/e4053b18-2499-4241-a2ad-5a673b22c6ae-kube-api-access-8p2v7\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.583801 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj29k\" (UniqueName: \"kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k\") pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.583864 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.584151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts\") pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.584189 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.585076 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts\") pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: E1124 21:30:41.597696 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccceb717_3d31_47bd_a9af_983a5a247278.slice/crio-8eaa460c8ff9b0462d290fd51c861c99985660a71d7552cdf8c5971aeb88faa8\": RecentStats: unable to find data in memory cache]" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.600556 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj29k\" (UniqueName: 
\"kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k\") pod \"aodh-92dd-account-create-frmpq\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.624077 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.686706 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.686850 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2v7\" (UniqueName: \"kubernetes.io/projected/e4053b18-2499-4241-a2ad-5a673b22c6ae-kube-api-access-8p2v7\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.686969 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.694167 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.695044 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4053b18-2499-4241-a2ad-5a673b22c6ae-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.706675 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2v7\" (UniqueName: \"kubernetes.io/projected/e4053b18-2499-4241-a2ad-5a673b22c6ae-kube-api-access-8p2v7\") pod \"nova-cell0-conductor-0\" (UID: \"e4053b18-2499-4241-a2ad-5a673b22c6ae\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.779220 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:41 crc kubenswrapper[4801]: I1124 21:30:41.868150 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.260266 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v8w6x"] Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.461579 4801 generic.go:334] "Generic (PLEG): container finished" podID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerID="ce9494a0c546be01945a4e4c01f7b03458a1ef7c0ef1687713ae3b191b612c03" exitCode=137 Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.462013 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerDied","Data":"ce9494a0c546be01945a4e4c01f7b03458a1ef7c0ef1687713ae3b191b612c03"} Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.464182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v8w6x" 
event={"ID":"687bf50f-6505-48a5-ae26-7b0fbaf6da04","Type":"ContainerStarted","Data":"29a69f989b85b8065376f98bfc75a3d3debdbadc7935ecf8b653c0c064a29c60"} Nov 24 21:30:42 crc kubenswrapper[4801]: W1124 21:30:42.469654 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod209d129f_a849_4435_b353_77db8299bb51.slice/crio-b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b WatchSource:0}: Error finding container b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b: Status 404 returned error can't find the container with id b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.472626 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-92dd-account-create-frmpq"] Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.651645 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.697684 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.750418 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.750804 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.750937 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.751106 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s58tl\" (UniqueName: \"kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.751293 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.751450 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.751560 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd\") pod \"cab5894e-5c00-4eaa-a829-9adcef352f4a\" (UID: \"cab5894e-5c00-4eaa-a829-9adcef352f4a\") " Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.755903 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.759458 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.763039 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts" (OuterVolumeSpecName: "scripts") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.765696 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl" (OuterVolumeSpecName: "kube-api-access-s58tl") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "kube-api-access-s58tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.818204 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.857856 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.857891 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.857900 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cab5894e-5c00-4eaa-a829-9adcef352f4a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.857910 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.857920 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s58tl\" (UniqueName: \"kubernetes.io/projected/cab5894e-5c00-4eaa-a829-9adcef352f4a-kube-api-access-s58tl\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.913986 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.931468 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data" (OuterVolumeSpecName: "config-data") pod "cab5894e-5c00-4eaa-a829-9adcef352f4a" (UID: "cab5894e-5c00-4eaa-a829-9adcef352f4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.961783 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4801]: I1124 21:30:42.961964 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab5894e-5c00-4eaa-a829-9adcef352f4a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.479055 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4053b18-2499-4241-a2ad-5a673b22c6ae","Type":"ContainerStarted","Data":"02fc250397407eeaa50ed98ad1583cd832c7dc348a441473435392c5382dff0d"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.479110 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4053b18-2499-4241-a2ad-5a673b22c6ae","Type":"ContainerStarted","Data":"c71276514aa3705080c37a89de337e3f0e79854806531bb4793f52aa8921acb3"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.479323 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.480962 4801 generic.go:334] "Generic (PLEG): container finished" podID="209d129f-a849-4435-b353-77db8299bb51" 
containerID="e626d596dc6dad2c961980cb46a4b79dc85a7e7cebda9bed908a633904b537c4" exitCode=0 Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.481028 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-92dd-account-create-frmpq" event={"ID":"209d129f-a849-4435-b353-77db8299bb51","Type":"ContainerDied","Data":"e626d596dc6dad2c961980cb46a4b79dc85a7e7cebda9bed908a633904b537c4"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.481052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-92dd-account-create-frmpq" event={"ID":"209d129f-a849-4435-b353-77db8299bb51","Type":"ContainerStarted","Data":"b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.486004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cab5894e-5c00-4eaa-a829-9adcef352f4a","Type":"ContainerDied","Data":"c83984475e055b32fa1ee390c5f429b0d818d9f0f8ffd2f96aee8472d60763a9"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.486067 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.486058 4801 scope.go:117] "RemoveContainer" containerID="ce9494a0c546be01945a4e4c01f7b03458a1ef7c0ef1687713ae3b191b612c03" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.489226 4801 generic.go:334] "Generic (PLEG): container finished" podID="687bf50f-6505-48a5-ae26-7b0fbaf6da04" containerID="4e51a3d93a1fc66c780608126f6c735a668c372583e6ae7a71b533335eb4c056" exitCode=0 Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.489272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v8w6x" event={"ID":"687bf50f-6505-48a5-ae26-7b0fbaf6da04","Type":"ContainerDied","Data":"4e51a3d93a1fc66c780608126f6c735a668c372583e6ae7a71b533335eb4c056"} Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.514478 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.514438968 podStartE2EDuration="2.514438968s" podCreationTimestamp="2025-11-24 21:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:43.5038408 +0000 UTC m=+1415.586427540" watchObservedRunningTime="2025-11-24 21:30:43.514438968 +0000 UTC m=+1415.597025678" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.542547 4801 scope.go:117] "RemoveContainer" containerID="6c39a1fec9223210ce084f5e118b913213b323a0f2fe8fae3c6283d9e90aec99" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.590141 4801 scope.go:117] "RemoveContainer" containerID="04777bbd07611a03066280666de23330435a49d4ec38472f924de44fad095aaa" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.600096 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.618103 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.632849 4801 scope.go:117] "RemoveContainer" containerID="e43875aff6d0d4421bbea9ddb5a3718a3b9f98ed56c756aa3aa9cf4efe433f9c" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.633667 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:30:43 crc kubenswrapper[4801]: E1124 21:30:43.634313 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-central-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634335 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-central-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: E1124 21:30:43.634363 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="sg-core" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634384 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="sg-core" Nov 24 21:30:43 crc kubenswrapper[4801]: E1124 21:30:43.634421 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-notification-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634427 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-notification-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: E1124 21:30:43.634441 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="proxy-httpd" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634448 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="proxy-httpd" Nov 24 21:30:43 crc 
kubenswrapper[4801]: I1124 21:30:43.634673 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="sg-core" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634693 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-central-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634711 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="proxy-httpd" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.634731 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" containerName="ceilometer-notification-agent" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.637108 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.643574 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.649908 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.654879 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689236 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689274 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689392 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxjz\" (UniqueName: \"kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689510 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.689566 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd\") pod \"ceilometer-0\" 
(UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.793877 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.793935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.794077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.794112 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnxjz\" (UniqueName: \"kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.794213 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.794246 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.794409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.797253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.797415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.854441 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxjz\" (UniqueName: \"kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.869071 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 
21:30:43.871959 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.880329 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.892151 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts\") pod \"ceilometer-0\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") " pod="openstack/ceilometer-0" Nov 24 21:30:43 crc kubenswrapper[4801]: I1124 21:30:43.962121 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.580126 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:30:44 crc kubenswrapper[4801]: W1124 21:30:44.612928 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc659af87_811b_415d_b2e4_06f6b228ec40.slice/crio-ecaf2bc5596901f7a94ef7a12a78abc7b5d0871ce21d607971bebbe4af4660a1 WatchSource:0}: Error finding container ecaf2bc5596901f7a94ef7a12a78abc7b5d0871ce21d607971bebbe4af4660a1: Status 404 returned error can't find the container with id ecaf2bc5596901f7a94ef7a12a78abc7b5d0871ce21d607971bebbe4af4660a1 Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.691979 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab5894e-5c00-4eaa-a829-9adcef352f4a" path="/var/lib/kubelet/pods/cab5894e-5c00-4eaa-a829-9adcef352f4a/volumes" Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.835105 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.966215 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts\") pod \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.969566 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxzpq\" (UniqueName: \"kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq\") pod \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\" (UID: \"687bf50f-6505-48a5-ae26-7b0fbaf6da04\") " Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.968509 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "687bf50f-6505-48a5-ae26-7b0fbaf6da04" (UID: "687bf50f-6505-48a5-ae26-7b0fbaf6da04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:30:44 crc kubenswrapper[4801]: I1124 21:30:44.991054 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq" (OuterVolumeSpecName: "kube-api-access-mxzpq") pod "687bf50f-6505-48a5-ae26-7b0fbaf6da04" (UID: "687bf50f-6505-48a5-ae26-7b0fbaf6da04"). InnerVolumeSpecName "kube-api-access-mxzpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.073242 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxzpq\" (UniqueName: \"kubernetes.io/projected/687bf50f-6505-48a5-ae26-7b0fbaf6da04-kube-api-access-mxzpq\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.073278 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687bf50f-6505-48a5-ae26-7b0fbaf6da04-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.097619 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.278044 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts\") pod \"209d129f-a849-4435-b353-77db8299bb51\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.278494 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj29k\" (UniqueName: \"kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k\") pod \"209d129f-a849-4435-b353-77db8299bb51\" (UID: \"209d129f-a849-4435-b353-77db8299bb51\") " Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.278515 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "209d129f-a849-4435-b353-77db8299bb51" (UID: "209d129f-a849-4435-b353-77db8299bb51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.279356 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/209d129f-a849-4435-b353-77db8299bb51-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.286367 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k" (OuterVolumeSpecName: "kube-api-access-pj29k") pod "209d129f-a849-4435-b353-77db8299bb51" (UID: "209d129f-a849-4435-b353-77db8299bb51"). InnerVolumeSpecName "kube-api-access-pj29k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.382882 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj29k\" (UniqueName: \"kubernetes.io/projected/209d129f-a849-4435-b353-77db8299bb51-kube-api-access-pj29k\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.518363 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v8w6x" event={"ID":"687bf50f-6505-48a5-ae26-7b0fbaf6da04","Type":"ContainerDied","Data":"29a69f989b85b8065376f98bfc75a3d3debdbadc7935ecf8b653c0c064a29c60"} Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.518722 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a69f989b85b8065376f98bfc75a3d3debdbadc7935ecf8b653c0c064a29c60" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.518471 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v8w6x" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.520430 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-92dd-account-create-frmpq" event={"ID":"209d129f-a849-4435-b353-77db8299bb51","Type":"ContainerDied","Data":"b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b"} Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.520478 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b828f9acff5c5a68ad00152f9fa43b3945f94128fc7055f2ddbfdaa2b9fcf14b" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.520523 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-92dd-account-create-frmpq" Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.537308 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerStarted","Data":"1761fd7aa3e4422f5ecaa0d46762f6bc6d396846886ba477e6794e816f42d6bb"} Nov 24 21:30:45 crc kubenswrapper[4801]: I1124 21:30:45.537521 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerStarted","Data":"ecaf2bc5596901f7a94ef7a12a78abc7b5d0871ce21d607971bebbe4af4660a1"} Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.552995 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerStarted","Data":"75c5d57e9df2ef9a8214581c0d03e001704f8513bba187d48a30dd7053c20edb"} Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.625856 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-qszd4"] Nov 24 21:30:46 crc kubenswrapper[4801]: E1124 21:30:46.628093 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="209d129f-a849-4435-b353-77db8299bb51" containerName="mariadb-account-create" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.628117 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="209d129f-a849-4435-b353-77db8299bb51" containerName="mariadb-account-create" Nov 24 21:30:46 crc kubenswrapper[4801]: E1124 21:30:46.628191 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687bf50f-6505-48a5-ae26-7b0fbaf6da04" containerName="mariadb-database-create" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.628219 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="687bf50f-6505-48a5-ae26-7b0fbaf6da04" containerName="mariadb-database-create" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.628489 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="209d129f-a849-4435-b353-77db8299bb51" containerName="mariadb-account-create" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.628513 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="687bf50f-6505-48a5-ae26-7b0fbaf6da04" containerName="mariadb-database-create" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.629847 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.633317 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.633803 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.634960 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5ndmv" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.634964 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.657261 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qszd4"] Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.738500 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsmpb\" (UniqueName: \"kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.752761 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.752871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle\") pod \"aodh-db-sync-qszd4\" (UID: 
\"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.752997 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.860248 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsmpb\" (UniqueName: \"kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.860418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.860483 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.860572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.867941 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.878886 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.880750 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsmpb\" (UniqueName: \"kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:46 crc kubenswrapper[4801]: I1124 21:30:46.894971 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts\") pod \"aodh-db-sync-qszd4\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:47 crc kubenswrapper[4801]: I1124 21:30:47.002494 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qszd4" Nov 24 21:30:47 crc kubenswrapper[4801]: I1124 21:30:47.566737 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerStarted","Data":"a44b71df825b9864b648a143841294c85e1531782759b972b33236f674c7629b"} Nov 24 21:30:47 crc kubenswrapper[4801]: I1124 21:30:47.660897 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qszd4"] Nov 24 21:30:47 crc kubenswrapper[4801]: W1124 21:30:47.663493 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e4ba8f9_6e5a_4073_b0f6_9311fb55d725.slice/crio-c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9 WatchSource:0}: Error finding container c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9: Status 404 returned error can't find the container with id c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9 Nov 24 21:30:48 crc kubenswrapper[4801]: I1124 21:30:48.578988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qszd4" event={"ID":"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725","Type":"ContainerStarted","Data":"c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9"} Nov 24 21:30:48 crc kubenswrapper[4801]: I1124 21:30:48.588341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerStarted","Data":"0f400db3db232da24f319e76ccabb1b062d26a984778675019e2ed6fa749edcd"} Nov 24 21:30:48 crc kubenswrapper[4801]: I1124 21:30:48.588502 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:30:48 crc kubenswrapper[4801]: I1124 21:30:48.625852 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.35951938 podStartE2EDuration="5.625826042s" podCreationTimestamp="2025-11-24 21:30:43 +0000 UTC" firstStartedPulling="2025-11-24 21:30:44.658141866 +0000 UTC m=+1416.740728536" lastFinishedPulling="2025-11-24 21:30:47.924448528 +0000 UTC m=+1420.007035198" observedRunningTime="2025-11-24 21:30:48.617293937 +0000 UTC m=+1420.699880607" watchObservedRunningTime="2025-11-24 21:30:48.625826042 +0000 UTC m=+1420.708412712" Nov 24 21:30:51 crc kubenswrapper[4801]: I1124 21:30:51.910799 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.569317 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-prbd8"] Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.572082 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.581094 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.581106 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.606381 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-prbd8"] Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.745546 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.748262 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " 
pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.748390 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.748418 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xrn\" (UniqueName: \"kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.748583 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.749704 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.753490 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.798152 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.855849 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xrn\" (UniqueName: \"kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856085 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhgf\" (UniqueName: \"kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856148 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856178 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856245 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856321 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.856380 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.867252 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.886574 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.899799 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xrn\" (UniqueName: \"kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.938656 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data\") pod \"nova-cell0-cell-mapping-prbd8\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.963333 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhgf\" (UniqueName: \"kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.967118 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.967198 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs\") pod \"nova-api-0\" (UID: 
\"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.967469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.964706 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.972462 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:52 crc kubenswrapper[4801]: I1124 21:30:52.992664 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.008224 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.013777 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.050993 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.061193 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhgf\" (UniqueName: \"kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf\") pod \"nova-api-0\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " pod="openstack/nova-api-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.075210 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.075539 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " 
pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.075744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.075924 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt84v\" (UniqueName: \"kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.144073 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.169874 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.194323 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt84v\" (UniqueName: \"kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.194418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.194590 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.208245 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.209930 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.211968 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.217069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.227065 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.234505 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.248139 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.248275 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt84v\" (UniqueName: \"kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v\") pod \"nova-metadata-0\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.251328 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.299525 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.355771 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.361551 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.365568 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.368077 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.370835 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.416753 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.416979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2r5b\" (UniqueName: \"kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.417103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.422764 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.463996 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.481728 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.520356 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.520443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.520881 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.520972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521476 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521528 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r5b\" (UniqueName: \"kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521553 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521606 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2tq\" (UniqueName: \"kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521742 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfr99\" (UniqueName: \"kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521796 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.521938 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.526888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.537186 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.550333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r5b\" (UniqueName: \"kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b\") pod \"nova-scheduler-0\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625084 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625168 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625199 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t2tq\" (UniqueName: \"kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625249 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfr99\" (UniqueName: \"kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625309 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.625340 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626399 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626417 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626480 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626927 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.626990 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.627513 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.628003 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.628245 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.645545 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.645805 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.647203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t2tq\" (UniqueName: \"kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq\") pod \"dnsmasq-dns-9b86998b5-9c2vs\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.656593 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfr99\" (UniqueName: \"kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.692397 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:53 crc kubenswrapper[4801]: I1124 21:30:53.701062 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.584427 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r8zks"] Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.586982 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.591244 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.591596 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.609306 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r8zks"] Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.760818 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.760929 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.760966 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlkz\" (UniqueName: 
\"kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.760997 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.867727 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.873996 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.874148 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znlkz\" (UniqueName: \"kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.874264 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.908955 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.911873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.919125 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:54 crc kubenswrapper[4801]: I1124 21:30:54.923304 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znlkz\" (UniqueName: \"kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz\") pod \"nova-cell1-conductor-db-sync-r8zks\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:55 crc kubenswrapper[4801]: I1124 21:30:55.010520 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:30:55 crc kubenswrapper[4801]: I1124 21:30:55.829698 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qszd4" event={"ID":"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725","Type":"ContainerStarted","Data":"44f09d6a53a36e24fcb7eeacafa0ccbed2f5bddd32222f5bdb242ca91f22cf99"} Nov 24 21:30:55 crc kubenswrapper[4801]: I1124 21:30:55.955792 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:30:55 crc kubenswrapper[4801]: W1124 21:30:55.957704 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5692084b_8a09_42b5_a3ff_606608cbad05.slice/crio-2acf96eb00eb80846f32a2cdc7a9d477c8dd8adf9cc8b01247e7031b7769fe07 WatchSource:0}: Error finding container 2acf96eb00eb80846f32a2cdc7a9d477c8dd8adf9cc8b01247e7031b7769fe07: Status 404 returned error can't find the container with id 2acf96eb00eb80846f32a2cdc7a9d477c8dd8adf9cc8b01247e7031b7769fe07 Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.068810 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.085070 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-prbd8"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.096482 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.099225 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-qszd4" podStartSLOduration=3.180387174 podStartE2EDuration="10.099017367s" podCreationTimestamp="2025-11-24 21:30:46 +0000 UTC" firstStartedPulling="2025-11-24 21:30:47.669194774 +0000 UTC m=+1419.751781444" lastFinishedPulling="2025-11-24 21:30:54.587824967 +0000 UTC m=+1426.670411637" 
observedRunningTime="2025-11-24 21:30:55.855485897 +0000 UTC m=+1427.938072567" watchObservedRunningTime="2025-11-24 21:30:56.099017367 +0000 UTC m=+1428.181604037" Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.163088 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.195323 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r8zks"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.223451 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.847544 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c39cc9a-240b-4624-a0cb-33ea4538dcad","Type":"ContainerStarted","Data":"66df7401192322d21988d03c59a909136779026f97737a1e84eced81db36fbc3"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.851584 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r8zks" event={"ID":"cf84f5e0-9d3c-4023-80f7-e84a6c810221","Type":"ContainerStarted","Data":"fe910383ce98b07da84698872943eb54c77b103af981085060792eb29cea0f11"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.851612 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r8zks" event={"ID":"cf84f5e0-9d3c-4023-80f7-e84a6c810221","Type":"ContainerStarted","Data":"e07ffcede915dc7d0addeb432bbf4a65e0cd85a9bfe2914b6c1b3b5d4d872991"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.855550 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5692084b-8a09-42b5-a3ff-606608cbad05","Type":"ContainerStarted","Data":"2acf96eb00eb80846f32a2cdc7a9d477c8dd8adf9cc8b01247e7031b7769fe07"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.857401 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerStarted","Data":"19d4b6922771a7037b46dc335b92fd2d441285eabc8197e2802d54166ddd4054"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.859039 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerStarted","Data":"c7fb4c752e10def9915ae4aa78ebb4b327d77f7c5a4a8d7dab864a02d9f9ab59"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.861482 4801 generic.go:334] "Generic (PLEG): container finished" podID="1915439f-0e90-491e-8949-f895b1483935" containerID="d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761" exitCode=0 Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.861547 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" event={"ID":"1915439f-0e90-491e-8949-f895b1483935","Type":"ContainerDied","Data":"d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.861578 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" event={"ID":"1915439f-0e90-491e-8949-f895b1483935","Type":"ContainerStarted","Data":"c77049ef29118c8840d8abbf79b3b9d09f36520f20e791be3dbef7c6f65394e2"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.874699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-prbd8" event={"ID":"ee263086-3eef-4ad1-903d-d7c18a90028f","Type":"ContainerStarted","Data":"eb3334215a7d6b369544df16d3f5062ed4500b8177d1f1fd71e4d0f83bd39a88"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.874975 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-prbd8" 
event={"ID":"ee263086-3eef-4ad1-903d-d7c18a90028f","Type":"ContainerStarted","Data":"6993325ed4dc0e3570f60508be035397e21c5141e20084e9d30d73cd66593edb"} Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.879644 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-r8zks" podStartSLOduration=2.879620067 podStartE2EDuration="2.879620067s" podCreationTimestamp="2025-11-24 21:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:56.875788568 +0000 UTC m=+1428.958375228" watchObservedRunningTime="2025-11-24 21:30:56.879620067 +0000 UTC m=+1428.962206737" Nov 24 21:30:56 crc kubenswrapper[4801]: I1124 21:30:56.906859 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-prbd8" podStartSLOduration=4.90682763 podStartE2EDuration="4.90682763s" podCreationTimestamp="2025-11-24 21:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:56.897331906 +0000 UTC m=+1428.979918576" watchObservedRunningTime="2025-11-24 21:30:56.90682763 +0000 UTC m=+1428.989414310" Nov 24 21:30:57 crc kubenswrapper[4801]: I1124 21:30:57.231748 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:30:57 crc kubenswrapper[4801]: I1124 21:30:57.275851 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:30:57 crc kubenswrapper[4801]: I1124 21:30:57.916198 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" event={"ID":"1915439f-0e90-491e-8949-f895b1483935","Type":"ContainerStarted","Data":"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a"} Nov 24 21:30:57 crc kubenswrapper[4801]: I1124 21:30:57.953345 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" podStartSLOduration=5.953318214 podStartE2EDuration="5.953318214s" podCreationTimestamp="2025-11-24 21:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:57.945678297 +0000 UTC m=+1430.028264967" watchObservedRunningTime="2025-11-24 21:30:57.953318214 +0000 UTC m=+1430.035904884" Nov 24 21:30:58 crc kubenswrapper[4801]: I1124 21:30:58.695438 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:30:59 crc kubenswrapper[4801]: I1124 21:30:59.958684 4801 generic.go:334] "Generic (PLEG): container finished" podID="9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" containerID="44f09d6a53a36e24fcb7eeacafa0ccbed2f5bddd32222f5bdb242ca91f22cf99" exitCode=0 Nov 24 21:30:59 crc kubenswrapper[4801]: I1124 21:30:59.958909 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qszd4" event={"ID":"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725","Type":"ContainerDied","Data":"44f09d6a53a36e24fcb7eeacafa0ccbed2f5bddd32222f5bdb242ca91f22cf99"} Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.410574 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.416260 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.452947 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.470053 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.470308 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.470691 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcjr\" (UniqueName: \"kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.573813 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.573993 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.574021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcjr\" (UniqueName: \"kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.575227 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.575460 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.598575 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcjr\" (UniqueName: \"kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr\") pod \"redhat-marketplace-sgmc9\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:00 crc kubenswrapper[4801]: I1124 21:31:00.782808 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:01 crc kubenswrapper[4801]: I1124 21:31:00.996541 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerStarted","Data":"1c2ffeaab5783823801b1f3e7251892d26113aaad7be9582bc6f4d77436f3855"} Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.088900 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c39cc9a-240b-4624-a0cb-33ea4538dcad","Type":"ContainerStarted","Data":"2d46854f650c0a13b4553febefc7a37edcebf2daf2c9d4d39e15235419a2396d"} Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.089840 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2d46854f650c0a13b4553febefc7a37edcebf2daf2c9d4d39e15235419a2396d" gracePeriod=30 Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.096537 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5692084b-8a09-42b5-a3ff-606608cbad05","Type":"ContainerStarted","Data":"c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a"} Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.121725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerStarted","Data":"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a"} Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.121799 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerStarted","Data":"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8"} Nov 24 21:31:02 crc 
kubenswrapper[4801]: I1124 21:31:02.121833 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-log" containerID="cri-o://e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" gracePeriod=30 Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.121919 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-metadata" containerID="cri-o://3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" gracePeriod=30 Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.138606 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerStarted","Data":"3c1e468dcd1d461516e4f8b755b6acc5a33e1ae8f4f4fc88d7bc67e2a38bc316"} Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.140787 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.086948367 podStartE2EDuration="9.140760684s" podCreationTimestamp="2025-11-24 21:30:53 +0000 UTC" firstStartedPulling="2025-11-24 21:30:56.23974395 +0000 UTC m=+1428.322330620" lastFinishedPulling="2025-11-24 21:31:00.293556267 +0000 UTC m=+1432.376142937" observedRunningTime="2025-11-24 21:31:02.124033235 +0000 UTC m=+1434.206619905" watchObservedRunningTime="2025-11-24 21:31:02.140760684 +0000 UTC m=+1434.223347354" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.171082 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.789240087 podStartE2EDuration="10.171047952s" podCreationTimestamp="2025-11-24 21:30:52 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.902781103 +0000 UTC m=+1427.985367773" 
lastFinishedPulling="2025-11-24 21:31:00.284588968 +0000 UTC m=+1432.367175638" observedRunningTime="2025-11-24 21:31:02.152853418 +0000 UTC m=+1434.235440098" watchObservedRunningTime="2025-11-24 21:31:02.171047952 +0000 UTC m=+1434.253634622" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.199686 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.883668676 podStartE2EDuration="10.19965479s" podCreationTimestamp="2025-11-24 21:30:52 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.971182354 +0000 UTC m=+1428.053769024" lastFinishedPulling="2025-11-24 21:31:00.287168468 +0000 UTC m=+1432.369755138" observedRunningTime="2025-11-24 21:31:02.176882603 +0000 UTC m=+1434.259469273" watchObservedRunningTime="2025-11-24 21:31:02.19965479 +0000 UTC m=+1434.282241470" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.242169 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.256788 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.876602084 podStartE2EDuration="10.256754479s" podCreationTimestamp="2025-11-24 21:30:52 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.910034317 +0000 UTC m=+1427.992620987" lastFinishedPulling="2025-11-24 21:31:00.290186692 +0000 UTC m=+1432.372773382" observedRunningTime="2025-11-24 21:31:02.224726707 +0000 UTC m=+1434.307313377" watchObservedRunningTime="2025-11-24 21:31:02.256754479 +0000 UTC m=+1434.339341149" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.530023 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qszd4" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.584036 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts\") pod \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.584184 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data\") pod \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.584261 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle\") pod \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.584484 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsmpb\" (UniqueName: \"kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb\") pod \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\" (UID: \"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725\") " Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.595323 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb" (OuterVolumeSpecName: "kube-api-access-gsmpb") pod "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" (UID: "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725"). InnerVolumeSpecName "kube-api-access-gsmpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.600888 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts" (OuterVolumeSpecName: "scripts") pod "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" (UID: "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.652042 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" (UID: "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.676691 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data" (OuterVolumeSpecName: "config-data") pod "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" (UID: "9e4ba8f9-6e5a-4073-b0f6-9311fb55d725"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.689129 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.689171 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.689185 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4801]: I1124 21:31:02.689197 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsmpb\" (UniqueName: \"kubernetes.io/projected/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725-kube-api-access-gsmpb\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.101150 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.166740 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.166789 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.186800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qszd4" event={"ID":"9e4ba8f9-6e5a-4073-b0f6-9311fb55d725","Type":"ContainerDied","Data":"c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.186859 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c202b9695bb1980ee5f88bfffe9ca649a50c9715aaa8424cfe705312ec0341e9" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.186968 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qszd4" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200435 4801 generic.go:334] "Generic (PLEG): container finished" podID="d7f13040-180f-4f76-b879-9256226fbd40" containerID="3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" exitCode=0 Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200483 4801 generic.go:334] "Generic (PLEG): container finished" podID="d7f13040-180f-4f76-b879-9256226fbd40" containerID="e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" exitCode=143 Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200566 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerDied","Data":"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200578 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerDied","Data":"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200632 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7f13040-180f-4f76-b879-9256226fbd40","Type":"ContainerDied","Data":"19d4b6922771a7037b46dc335b92fd2d441285eabc8197e2802d54166ddd4054"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.200654 4801 scope.go:117] "RemoveContainer" containerID="3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.204706 4801 generic.go:334] "Generic (PLEG): container finished" podID="cc8afac2-234d-4592-b1c8-632359844ce5" 
containerID="b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365" exitCode=0 Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.205519 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerDied","Data":"b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.205578 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerStarted","Data":"c6b596a298506b2f49a3e921a2debc71c054c5d9276e10dc902c79ff11587d6e"} Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.219991 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data\") pod \"d7f13040-180f-4f76-b879-9256226fbd40\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.223513 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle\") pod \"d7f13040-180f-4f76-b879-9256226fbd40\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.223888 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs\") pod \"d7f13040-180f-4f76-b879-9256226fbd40\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.224039 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt84v\" (UniqueName: 
\"kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v\") pod \"d7f13040-180f-4f76-b879-9256226fbd40\" (UID: \"d7f13040-180f-4f76-b879-9256226fbd40\") " Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.224562 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs" (OuterVolumeSpecName: "logs") pod "d7f13040-180f-4f76-b879-9256226fbd40" (UID: "d7f13040-180f-4f76-b879-9256226fbd40"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.226104 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f13040-180f-4f76-b879-9256226fbd40-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.242677 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v" (OuterVolumeSpecName: "kube-api-access-pt84v") pod "d7f13040-180f-4f76-b879-9256226fbd40" (UID: "d7f13040-180f-4f76-b879-9256226fbd40"). InnerVolumeSpecName "kube-api-access-pt84v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.266801 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7f13040-180f-4f76-b879-9256226fbd40" (UID: "d7f13040-180f-4f76-b879-9256226fbd40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.273869 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data" (OuterVolumeSpecName: "config-data") pod "d7f13040-180f-4f76-b879-9256226fbd40" (UID: "d7f13040-180f-4f76-b879-9256226fbd40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.300774 4801 scope.go:117] "RemoveContainer" containerID="e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.336378 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.336710 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f13040-180f-4f76-b879-9256226fbd40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.336770 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt84v\" (UniqueName: \"kubernetes.io/projected/d7f13040-180f-4f76-b879-9256226fbd40-kube-api-access-pt84v\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.341882 4801 scope.go:117] "RemoveContainer" containerID="3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" Nov 24 21:31:03 crc kubenswrapper[4801]: E1124 21:31:03.343239 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8\": container with ID starting with 
3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8 not found: ID does not exist" containerID="3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.343383 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8"} err="failed to get container status \"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8\": rpc error: code = NotFound desc = could not find container \"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8\": container with ID starting with 3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8 not found: ID does not exist" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.343494 4801 scope.go:117] "RemoveContainer" containerID="e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" Nov 24 21:31:03 crc kubenswrapper[4801]: E1124 21:31:03.344560 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a\": container with ID starting with e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a not found: ID does not exist" containerID="e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.344608 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a"} err="failed to get container status \"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a\": rpc error: code = NotFound desc = could not find container \"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a\": container with ID starting with e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a not found: ID does not 
exist" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.344651 4801 scope.go:117] "RemoveContainer" containerID="3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.349348 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8"} err="failed to get container status \"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8\": rpc error: code = NotFound desc = could not find container \"3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8\": container with ID starting with 3cd9f9878c73ab0f8e8d3211e955145030a052658befe7abd5d185c8156db0c8 not found: ID does not exist" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.350505 4801 scope.go:117] "RemoveContainer" containerID="e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.351294 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a"} err="failed to get container status \"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a\": rpc error: code = NotFound desc = could not find container \"e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a\": container with ID starting with e52060492bcb11cd02c77422ca9cb18411696577aab261cc9a496c305c7cb08a not found: ID does not exist" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.580434 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.602189 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.629753 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.629805 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.631972 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:03 crc kubenswrapper[4801]: E1124 21:31:03.632605 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-log" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.632624 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-log" Nov 24 21:31:03 crc kubenswrapper[4801]: E1124 21:31:03.632647 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-metadata" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.632653 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-metadata" Nov 24 21:31:03 crc kubenswrapper[4801]: E1124 21:31:03.632668 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" containerName="aodh-db-sync" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.632674 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" containerName="aodh-db-sync" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.632905 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-metadata" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.632924 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" containerName="aodh-db-sync" Nov 24 21:31:03 crc kubenswrapper[4801]: 
I1124 21:31:03.632946 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f13040-180f-4f76-b879-9256226fbd40" containerName="nova-metadata-log" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.634756 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.639865 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.640071 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.648616 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.669198 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.669268 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.669289 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: 
I1124 21:31:03.669318 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.669482 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89nnm\" (UniqueName: \"kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.686503 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.694581 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.701523 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.772684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89nnm\" (UniqueName: \"kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.774915 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " 
pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.775101 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.775244 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.775578 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.776041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.791687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.794747 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.811980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89nnm\" (UniqueName: \"kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.859187 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data\") pod \"nova-metadata-0\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " pod="openstack/nova-metadata-0" Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.946983 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.947293 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="dnsmasq-dns" containerID="cri-o://81194875a4db0e96d7575732dc9ac2712fe2777611fb98bce9815e63167fca24" gracePeriod=10 Nov 24 21:31:03 crc kubenswrapper[4801]: I1124 21:31:03.958582 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.249988 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.251525 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.266819 4801 generic.go:334] "Generic (PLEG): container finished" podID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerID="81194875a4db0e96d7575732dc9ac2712fe2777611fb98bce9815e63167fca24" exitCode=0 Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.266976 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" event={"ID":"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7","Type":"ContainerDied","Data":"81194875a4db0e96d7575732dc9ac2712fe2777611fb98bce9815e63167fca24"} Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.398077 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:31:04 crc kubenswrapper[4801]: W1124 21:31:04.691465 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19b11296_213d_4d86_8113_152e26f97ed6.slice/crio-bee5a8c2d13b1dc6b76044ede03128dd32e60586092afb661423a4c2c0213ab0 WatchSource:0}: Error finding container bee5a8c2d13b1dc6b76044ede03128dd32e60586092afb661423a4c2c0213ab0: Status 404 returned error can't find the container with id 
bee5a8c2d13b1dc6b76044ede03128dd32e60586092afb661423a4c2c0213ab0 Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.780331 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f13040-180f-4f76-b879-9256226fbd40" path="/var/lib/kubelet/pods/d7f13040-180f-4f76-b879-9256226fbd40/volumes" Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.784199 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.825547 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.924478 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mppqt\" (UniqueName: \"kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt\") pod \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.924814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc\") pod \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.925089 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0\") pod \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.925125 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb\") pod 
\"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.925164 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb\") pod \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.925197 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config\") pod \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\" (UID: \"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7\") " Nov 24 21:31:04 crc kubenswrapper[4801]: I1124 21:31:04.940905 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt" (OuterVolumeSpecName: "kube-api-access-mppqt") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "kube-api-access-mppqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.032945 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mppqt\" (UniqueName: \"kubernetes.io/projected/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-kube-api-access-mppqt\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.090247 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.126158 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.135678 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.135721 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.157842 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config" (OuterVolumeSpecName: "config") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.164858 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.177016 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" (UID: "09fe2c93-649c-4d4c-8615-e5fbb04f5fd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.242751 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.243099 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.243108 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.323907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" event={"ID":"09fe2c93-649c-4d4c-8615-e5fbb04f5fd7","Type":"ContainerDied","Data":"2e17b2ea0687484b698ce7586fc3cf7e08c8949d52ffc2861c399165f726bf02"} Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.323989 4801 scope.go:117] "RemoveContainer" containerID="81194875a4db0e96d7575732dc9ac2712fe2777611fb98bce9815e63167fca24" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.325752 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-99chz" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.349087 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerStarted","Data":"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3"} Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.351265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerStarted","Data":"bee5a8c2d13b1dc6b76044ede03128dd32e60586092afb661423a4c2c0213ab0"} Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.412795 4801 scope.go:117] "RemoveContainer" containerID="20981b6b52523d658876b84226b0a4412d0256c60020c536e52669358693b36e" Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.459700 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:31:05 crc kubenswrapper[4801]: I1124 21:31:05.473489 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-99chz"] Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.292560 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:06 crc kubenswrapper[4801]: E1124 21:31:06.293509 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="dnsmasq-dns" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.293529 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="dnsmasq-dns" Nov 24 21:31:06 crc kubenswrapper[4801]: E1124 21:31:06.293577 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="init" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.293585 4801 
state_mem.go:107] "Deleted CPUSet assignment" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="init" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.293828 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" containerName="dnsmasq-dns" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.296301 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.308377 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.309631 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.312340 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5ndmv" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.318284 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.379141 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerStarted","Data":"1498aa8684b489dcc39c032c4b12c843cca093581e55996c963d68f1158dc07b"} Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.379200 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerStarted","Data":"d179597e01733ae376bfba67b49db9a8876a49cdc2ff8361bfffb41f5808cde3"} Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.384663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4bm\" (UniqueName: 
\"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.384734 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.384770 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.384889 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.397558 4801 generic.go:334] "Generic (PLEG): container finished" podID="cc8afac2-234d-4592-b1c8-632359844ce5" containerID="26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3" exitCode=0 Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.398850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerDied","Data":"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3"} Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.421123 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.421104383 podStartE2EDuration="3.421104383s" podCreationTimestamp="2025-11-24 21:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:06.413969592 +0000 UTC m=+1438.496556262" watchObservedRunningTime="2025-11-24 21:31:06.421104383 +0000 UTC m=+1438.503691053" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.495216 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.495857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4bm\" (UniqueName: \"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.496897 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.497051 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.509851 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts\") pod 
\"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.510863 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.514208 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.515406 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4bm\" (UniqueName: \"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm\") pod \"aodh-0\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.623877 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:06 crc kubenswrapper[4801]: I1124 21:31:06.689230 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fe2c93-649c-4d4c-8615-e5fbb04f5fd7" path="/var/lib/kubelet/pods/09fe2c93-649c-4d4c-8615-e5fbb04f5fd7/volumes" Nov 24 21:31:07 crc kubenswrapper[4801]: I1124 21:31:07.359914 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:07 crc kubenswrapper[4801]: I1124 21:31:07.415666 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerStarted","Data":"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc"} Nov 24 21:31:07 crc kubenswrapper[4801]: I1124 21:31:07.420680 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerStarted","Data":"8f3da7a3c11af05baaed914965eb4967148ffe31ebd910768f018f94a7a3136b"} Nov 24 21:31:07 crc kubenswrapper[4801]: I1124 21:31:07.449630 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sgmc9" podStartSLOduration=3.727219548 podStartE2EDuration="7.449592199s" podCreationTimestamp="2025-11-24 21:31:00 +0000 UTC" firstStartedPulling="2025-11-24 21:31:03.212660345 +0000 UTC m=+1435.295247015" lastFinishedPulling="2025-11-24 21:31:06.935032996 +0000 UTC m=+1439.017619666" observedRunningTime="2025-11-24 21:31:07.44284399 +0000 UTC m=+1439.525430670" watchObservedRunningTime="2025-11-24 21:31:07.449592199 +0000 UTC m=+1439.532178869" Nov 24 21:31:08 crc kubenswrapper[4801]: I1124 21:31:08.962754 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:31:08 crc kubenswrapper[4801]: I1124 21:31:08.964741 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 24 21:31:09 crc kubenswrapper[4801]: I1124 21:31:09.451461 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerStarted","Data":"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07"} Nov 24 21:31:09 crc kubenswrapper[4801]: I1124 21:31:09.637798 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.469431 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee263086-3eef-4ad1-903d-d7c18a90028f" containerID="eb3334215a7d6b369544df16d3f5062ed4500b8177d1f1fd71e4d0f83bd39a88" exitCode=0 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.469510 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-prbd8" event={"ID":"ee263086-3eef-4ad1-903d-d7c18a90028f","Type":"ContainerDied","Data":"eb3334215a7d6b369544df16d3f5062ed4500b8177d1f1fd71e4d0f83bd39a88"} Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.473420 4801 generic.go:334] "Generic (PLEG): container finished" podID="cf84f5e0-9d3c-4023-80f7-e84a6c810221" containerID="fe910383ce98b07da84698872943eb54c77b103af981085060792eb29cea0f11" exitCode=0 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.473571 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r8zks" event={"ID":"cf84f5e0-9d3c-4023-80f7-e84a6c810221","Type":"ContainerDied","Data":"fe910383ce98b07da84698872943eb54c77b103af981085060792eb29cea0f11"} Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.479184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerStarted","Data":"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3"} Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.633074 4801 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.636782 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="proxy-httpd" containerID="cri-o://0f400db3db232da24f319e76ccabb1b062d26a984778675019e2ed6fa749edcd" gracePeriod=30 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.637152 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-central-agent" containerID="cri-o://1761fd7aa3e4422f5ecaa0d46762f6bc6d396846886ba477e6794e816f42d6bb" gracePeriod=30 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.636874 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-notification-agent" containerID="cri-o://75c5d57e9df2ef9a8214581c0d03e001704f8513bba187d48a30dd7053c20edb" gracePeriod=30 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.636806 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="sg-core" containerID="cri-o://a44b71df825b9864b648a143841294c85e1531782759b972b33236f674c7629b" gracePeriod=30 Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.660908 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.790702 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:10 crc kubenswrapper[4801]: I1124 21:31:10.791909 
4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499458 4801 generic.go:334] "Generic (PLEG): container finished" podID="c659af87-811b-415d-b2e4-06f6b228ec40" containerID="0f400db3db232da24f319e76ccabb1b062d26a984778675019e2ed6fa749edcd" exitCode=0 Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499496 4801 generic.go:334] "Generic (PLEG): container finished" podID="c659af87-811b-415d-b2e4-06f6b228ec40" containerID="a44b71df825b9864b648a143841294c85e1531782759b972b33236f674c7629b" exitCode=2 Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499506 4801 generic.go:334] "Generic (PLEG): container finished" podID="c659af87-811b-415d-b2e4-06f6b228ec40" containerID="1761fd7aa3e4422f5ecaa0d46762f6bc6d396846886ba477e6794e816f42d6bb" exitCode=0 Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499734 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerDied","Data":"0f400db3db232da24f319e76ccabb1b062d26a984778675019e2ed6fa749edcd"} Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499765 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerDied","Data":"a44b71df825b9864b648a143841294c85e1531782759b972b33236f674c7629b"} Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.499777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerDied","Data":"1761fd7aa3e4422f5ecaa0d46762f6bc6d396846886ba477e6794e816f42d6bb"} Nov 24 21:31:11 crc kubenswrapper[4801]: I1124 21:31:11.909311 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sgmc9" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" 
containerName="registry-server" probeResult="failure" output=< Nov 24 21:31:11 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:31:11 crc kubenswrapper[4801]: > Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.306428 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-prbd8" Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.328274 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r8zks" Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433059 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts\") pod \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433162 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data\") pod \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts\") pod \"ee263086-3eef-4ad1-903d-d7c18a90028f\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") " Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433484 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znlkz\" (UniqueName: \"kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz\") pod \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") " Nov 24 21:31:12 crc 
kubenswrapper[4801]: I1124 21:31:12.433519 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle\") pod \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\" (UID: \"cf84f5e0-9d3c-4023-80f7-e84a6c810221\") "
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433551 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7xrn\" (UniqueName: \"kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn\") pod \"ee263086-3eef-4ad1-903d-d7c18a90028f\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") "
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433583 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle\") pod \"ee263086-3eef-4ad1-903d-d7c18a90028f\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") "
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.433842 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data\") pod \"ee263086-3eef-4ad1-903d-d7c18a90028f\" (UID: \"ee263086-3eef-4ad1-903d-d7c18a90028f\") "
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.453935 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts" (OuterVolumeSpecName: "scripts") pod "cf84f5e0-9d3c-4023-80f7-e84a6c810221" (UID: "cf84f5e0-9d3c-4023-80f7-e84a6c810221"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.461020 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn" (OuterVolumeSpecName: "kube-api-access-v7xrn") pod "ee263086-3eef-4ad1-903d-d7c18a90028f" (UID: "ee263086-3eef-4ad1-903d-d7c18a90028f"). InnerVolumeSpecName "kube-api-access-v7xrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.462237 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts" (OuterVolumeSpecName: "scripts") pod "ee263086-3eef-4ad1-903d-d7c18a90028f" (UID: "ee263086-3eef-4ad1-903d-d7c18a90028f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.468083 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz" (OuterVolumeSpecName: "kube-api-access-znlkz") pod "cf84f5e0-9d3c-4023-80f7-e84a6c810221" (UID: "cf84f5e0-9d3c-4023-80f7-e84a6c810221"). InnerVolumeSpecName "kube-api-access-znlkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.476489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data" (OuterVolumeSpecName: "config-data") pod "ee263086-3eef-4ad1-903d-d7c18a90028f" (UID: "ee263086-3eef-4ad1-903d-d7c18a90028f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.491831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data" (OuterVolumeSpecName: "config-data") pod "cf84f5e0-9d3c-4023-80f7-e84a6c810221" (UID: "cf84f5e0-9d3c-4023-80f7-e84a6c810221"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.492496 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee263086-3eef-4ad1-903d-d7c18a90028f" (UID: "ee263086-3eef-4ad1-903d-d7c18a90028f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.529753 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf84f5e0-9d3c-4023-80f7-e84a6c810221" (UID: "cf84f5e0-9d3c-4023-80f7-e84a6c810221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547685 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547724 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547762 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547774 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547788 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znlkz\" (UniqueName: \"kubernetes.io/projected/cf84f5e0-9d3c-4023-80f7-e84a6c810221-kube-api-access-znlkz\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547802 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf84f5e0-9d3c-4023-80f7-e84a6c810221-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547814 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7xrn\" (UniqueName: \"kubernetes.io/projected/ee263086-3eef-4ad1-903d-d7c18a90028f-kube-api-access-v7xrn\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.547826 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee263086-3eef-4ad1-903d-d7c18a90028f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.559709 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerStarted","Data":"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135"}
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.579274 4801 generic.go:334] "Generic (PLEG): container finished" podID="c659af87-811b-415d-b2e4-06f6b228ec40" containerID="75c5d57e9df2ef9a8214581c0d03e001704f8513bba187d48a30dd7053c20edb" exitCode=0
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.579655 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerDied","Data":"75c5d57e9df2ef9a8214581c0d03e001704f8513bba187d48a30dd7053c20edb"}
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.596773 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-prbd8" event={"ID":"ee263086-3eef-4ad1-903d-d7c18a90028f","Type":"ContainerDied","Data":"6993325ed4dc0e3570f60508be035397e21c5141e20084e9d30d73cd66593edb"}
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.596854 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6993325ed4dc0e3570f60508be035397e21c5141e20084e9d30d73cd66593edb"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.596975 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-prbd8"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.630398 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r8zks" event={"ID":"cf84f5e0-9d3c-4023-80f7-e84a6c810221","Type":"ContainerDied","Data":"e07ffcede915dc7d0addeb432bbf4a65e0cd85a9bfe2914b6c1b3b5d4d872991"}
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.630456 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07ffcede915dc7d0addeb432bbf4a65e0cd85a9bfe2914b6c1b3b5d4d872991"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.630548 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r8zks"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.647299 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 21:31:12 crc kubenswrapper[4801]: E1124 21:31:12.648510 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf84f5e0-9d3c-4023-80f7-e84a6c810221" containerName="nova-cell1-conductor-db-sync"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.648530 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf84f5e0-9d3c-4023-80f7-e84a6c810221" containerName="nova-cell1-conductor-db-sync"
Nov 24 21:31:12 crc kubenswrapper[4801]: E1124 21:31:12.648568 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee263086-3eef-4ad1-903d-d7c18a90028f" containerName="nova-manage"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.648589 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee263086-3eef-4ad1-903d-d7c18a90028f" containerName="nova-manage"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.649264 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee263086-3eef-4ad1-903d-d7c18a90028f" containerName="nova-manage"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.649291 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf84f5e0-9d3c-4023-80f7-e84a6c810221" containerName="nova-cell1-conductor-db-sync"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.650739 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.656024 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.659065 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.659098 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.659120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgcxw\" (UniqueName: \"kubernetes.io/projected/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-kube-api-access-wgcxw\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.738567 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.763409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.763688 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.763777 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgcxw\" (UniqueName: \"kubernetes.io/projected/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-kube-api-access-wgcxw\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.771300 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.772110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.787110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgcxw\" (UniqueName: \"kubernetes.io/projected/b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2-kube-api-access-wgcxw\") pod \"nova-cell1-conductor-0\" (UID: \"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.808631 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.810939 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-log" containerID="cri-o://1c2ffeaab5783823801b1f3e7251892d26113aaad7be9582bc6f4d77436f3855" gracePeriod=30
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.811647 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-api" containerID="cri-o://3c1e468dcd1d461516e4f8b755b6acc5a33e1ae8f4f4fc88d7bc67e2a38bc316" gracePeriod=30
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.884954 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.885304 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-log" containerID="cri-o://d179597e01733ae376bfba67b49db9a8876a49cdc2ff8361bfffb41f5808cde3" gracePeriod=30
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.885917 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-metadata" containerID="cri-o://1498aa8684b489dcc39c032c4b12c843cca093581e55996c963d68f1158dc07b" gracePeriod=30
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.910100 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 21:31:12 crc kubenswrapper[4801]: I1124 21:31:12.925761 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" containerName="nova-scheduler-scheduler" containerID="cri-o://c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" gracePeriod=30
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.021865 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.032611 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.078560 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.078630 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.078785 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.078891 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxjz\" (UniqueName: \"kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.079006 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.079043 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.079094 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd\") pod \"c659af87-811b-415d-b2e4-06f6b228ec40\" (UID: \"c659af87-811b-415d-b2e4-06f6b228ec40\") "
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.081603 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.086488 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz" (OuterVolumeSpecName: "kube-api-access-xnxjz") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "kube-api-access-xnxjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.087551 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.098871 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts" (OuterVolumeSpecName: "scripts") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.133714 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.187395 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.187790 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.187801 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnxjz\" (UniqueName: \"kubernetes.io/projected/c659af87-811b-415d-b2e4-06f6b228ec40-kube-api-access-xnxjz\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.187813 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.187823 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c659af87-811b-415d-b2e4-06f6b228ec40-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.273530 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.292518 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.318876 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data" (OuterVolumeSpecName: "config-data") pod "c659af87-811b-415d-b2e4-06f6b228ec40" (UID: "c659af87-811b-415d-b2e4-06f6b228ec40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.395268 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c659af87-811b-415d-b2e4-06f6b228ec40-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.636990 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.645324 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.666111 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.683965 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.684141 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" containerName="nova-scheduler-scheduler"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.687755 4801 generic.go:334] "Generic (PLEG): container finished" podID="19b11296-213d-4d86-8113-152e26f97ed6" containerID="1498aa8684b489dcc39c032c4b12c843cca093581e55996c963d68f1158dc07b" exitCode=0
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.687801 4801 generic.go:334] "Generic (PLEG): container finished" podID="19b11296-213d-4d86-8113-152e26f97ed6" containerID="d179597e01733ae376bfba67b49db9a8876a49cdc2ff8361bfffb41f5808cde3" exitCode=143
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.687873 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerDied","Data":"1498aa8684b489dcc39c032c4b12c843cca093581e55996c963d68f1158dc07b"}
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.687910 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerDied","Data":"d179597e01733ae376bfba67b49db9a8876a49cdc2ff8361bfffb41f5808cde3"}
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.692042 4801 generic.go:334] "Generic (PLEG): container finished" podID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerID="1c2ffeaab5783823801b1f3e7251892d26113aaad7be9582bc6f4d77436f3855" exitCode=143
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.692175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerDied","Data":"1c2ffeaab5783823801b1f3e7251892d26113aaad7be9582bc6f4d77436f3855"}
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.714958 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2","Type":"ContainerStarted","Data":"87f83dba3fbd1e38520f338db7b159d5dde314d388ea139e954194e84b4784c1"}
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.726975 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c659af87-811b-415d-b2e4-06f6b228ec40","Type":"ContainerDied","Data":"ecaf2bc5596901f7a94ef7a12a78abc7b5d0871ce21d607971bebbe4af4660a1"}
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.727059 4801 scope.go:117] "RemoveContainer" containerID="0f400db3db232da24f319e76ccabb1b062d26a984778675019e2ed6fa749edcd"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.727116 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.832249 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.846850 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.862348 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.863026 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="sg-core"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863047 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="sg-core"
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.863092 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-notification-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863099 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-notification-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.863139 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-central-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863146 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-central-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: E1124 21:31:13.863153 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="proxy-httpd"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863158 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="proxy-httpd"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863464 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="sg-core"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863488 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="proxy-httpd"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863505 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-central-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.863518 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" containerName="ceilometer-notification-agent"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.867514 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.884592 4801 scope.go:117] "RemoveContainer" containerID="a44b71df825b9864b648a143841294c85e1531782759b972b33236f674c7629b"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.885675 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.885922 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.889434 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.924957 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925056 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925406 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925491 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925616 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:13 crc kubenswrapper[4801]: I1124 21:31:13.925726 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp54r\" (UniqueName: \"kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.030751 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.030965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.031107 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.031187 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.031313 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.031440 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp54r\" (UniqueName: \"kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.031592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.032333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.036135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.053878 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.061087 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.062098 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.080067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0"
Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.095271 4801 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-dp54r\" (UniqueName: \"kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r\") pod \"ceilometer-0\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " pod="openstack/ceilometer-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.159922 4801 scope.go:117] "RemoveContainer" containerID="75c5d57e9df2ef9a8214581c0d03e001704f8513bba187d48a30dd7053c20edb" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.222990 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.231217 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.276411 4801 scope.go:117] "RemoveContainer" containerID="1761fd7aa3e4422f5ecaa0d46762f6bc6d396846886ba477e6794e816f42d6bb" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.344509 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data\") pod \"19b11296-213d-4d86-8113-152e26f97ed6\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.344791 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89nnm\" (UniqueName: \"kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm\") pod \"19b11296-213d-4d86-8113-152e26f97ed6\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.344826 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs\") pod \"19b11296-213d-4d86-8113-152e26f97ed6\" (UID: 
\"19b11296-213d-4d86-8113-152e26f97ed6\") " Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.344894 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle\") pod \"19b11296-213d-4d86-8113-152e26f97ed6\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.344972 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs\") pod \"19b11296-213d-4d86-8113-152e26f97ed6\" (UID: \"19b11296-213d-4d86-8113-152e26f97ed6\") " Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.349678 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs" (OuterVolumeSpecName: "logs") pod "19b11296-213d-4d86-8113-152e26f97ed6" (UID: "19b11296-213d-4d86-8113-152e26f97ed6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.370751 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm" (OuterVolumeSpecName: "kube-api-access-89nnm") pod "19b11296-213d-4d86-8113-152e26f97ed6" (UID: "19b11296-213d-4d86-8113-152e26f97ed6"). InnerVolumeSpecName "kube-api-access-89nnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.450503 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89nnm\" (UniqueName: \"kubernetes.io/projected/19b11296-213d-4d86-8113-152e26f97ed6-kube-api-access-89nnm\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.450544 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b11296-213d-4d86-8113-152e26f97ed6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.509161 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19b11296-213d-4d86-8113-152e26f97ed6" (UID: "19b11296-213d-4d86-8113-152e26f97ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.565619 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.581771 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data" (OuterVolumeSpecName: "config-data") pod "19b11296-213d-4d86-8113-152e26f97ed6" (UID: "19b11296-213d-4d86-8113-152e26f97ed6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.647581 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "19b11296-213d-4d86-8113-152e26f97ed6" (UID: "19b11296-213d-4d86-8113-152e26f97ed6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.669896 4801 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.670263 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b11296-213d-4d86-8113-152e26f97ed6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.717843 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c659af87-811b-415d-b2e4-06f6b228ec40" path="/var/lib/kubelet/pods/c659af87-811b-415d-b2e4-06f6b228ec40/volumes" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.750345 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2","Type":"ContainerStarted","Data":"29be63574281c9500a1022a4095ededa214887b690efefac17ddc3897bb96a5c"} Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.752079 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.780600 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"19b11296-213d-4d86-8113-152e26f97ed6","Type":"ContainerDied","Data":"bee5a8c2d13b1dc6b76044ede03128dd32e60586092afb661423a4c2c0213ab0"} Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.780667 4801 scope.go:117] "RemoveContainer" containerID="1498aa8684b489dcc39c032c4b12c843cca093581e55996c963d68f1158dc07b" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.780768 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.787005 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.786977544 podStartE2EDuration="2.786977544s" podCreationTimestamp="2025-11-24 21:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:14.77008739 +0000 UTC m=+1446.852674060" watchObservedRunningTime="2025-11-24 21:31:14.786977544 +0000 UTC m=+1446.869564214" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.837586 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.863388 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.878404 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.911455 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:14 crc kubenswrapper[4801]: E1124 21:31:14.912245 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-metadata" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.912266 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-metadata" Nov 24 21:31:14 crc kubenswrapper[4801]: E1124 21:31:14.912288 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-log" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.912295 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-log" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.912618 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-metadata" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.912639 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b11296-213d-4d86-8113-152e26f97ed6" containerName="nova-metadata-log" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.914228 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.917170 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.917186 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.935194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.982589 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.982691 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.982736 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmxq\" (UniqueName: \"kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.982885 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:14 crc kubenswrapper[4801]: I1124 21:31:14.982928 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.084134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.084202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.084563 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.084636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.084680 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmxq\" (UniqueName: \"kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.086442 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.091297 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.092196 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.095092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.116777 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmxq\" (UniqueName: \"kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq\") pod 
\"nova-metadata-0\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.256298 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.270440 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:15 crc kubenswrapper[4801]: W1124 21:31:15.315686 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5d6d9d_4009_479e_a7b2_a5c633334729.slice/crio-fb43305f7c8f02634cff2b694acb9b123a8873717cd9aea39cdd512edf86a1ba WatchSource:0}: Error finding container fb43305f7c8f02634cff2b694acb9b123a8873717cd9aea39cdd512edf86a1ba: Status 404 returned error can't find the container with id fb43305f7c8f02634cff2b694acb9b123a8873717cd9aea39cdd512edf86a1ba Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.427544 4801 scope.go:117] "RemoveContainer" containerID="d179597e01733ae376bfba67b49db9a8876a49cdc2ff8361bfffb41f5808cde3" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.827216 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerStarted","Data":"fb43305f7c8f02634cff2b694acb9b123a8873717cd9aea39cdd512edf86a1ba"} Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.829162 4801 generic.go:334] "Generic (PLEG): container finished" podID="5692084b-8a09-42b5-a3ff-606608cbad05" containerID="c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" exitCode=0 Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.829212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5692084b-8a09-42b5-a3ff-606608cbad05","Type":"ContainerDied","Data":"c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a"} Nov 
24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.855995 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerStarted","Data":"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8"} Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.856240 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-api" containerID="cri-o://99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07" gracePeriod=30 Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.857077 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-listener" containerID="cri-o://8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8" gracePeriod=30 Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.857141 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-notifier" containerID="cri-o://f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135" gracePeriod=30 Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.857194 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-evaluator" containerID="cri-o://7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3" gracePeriod=30 Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.873464 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:31:15 crc kubenswrapper[4801]: I1124 21:31:15.896713 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.819041712 podStartE2EDuration="9.896688686s" podCreationTimestamp="2025-11-24 21:31:06 +0000 UTC" firstStartedPulling="2025-11-24 21:31:07.368092592 +0000 UTC m=+1439.450679262" lastFinishedPulling="2025-11-24 21:31:15.445739566 +0000 UTC m=+1447.528326236" observedRunningTime="2025-11-24 21:31:15.890781883 +0000 UTC m=+1447.973368553" watchObservedRunningTime="2025-11-24 21:31:15.896688686 +0000 UTC m=+1447.979275356" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.020020 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data\") pod \"5692084b-8a09-42b5-a3ff-606608cbad05\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.020138 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2r5b\" (UniqueName: \"kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b\") pod \"5692084b-8a09-42b5-a3ff-606608cbad05\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.020406 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle\") pod \"5692084b-8a09-42b5-a3ff-606608cbad05\" (UID: \"5692084b-8a09-42b5-a3ff-606608cbad05\") " Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.028594 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b" (OuterVolumeSpecName: 
"kube-api-access-f2r5b") pod "5692084b-8a09-42b5-a3ff-606608cbad05" (UID: "5692084b-8a09-42b5-a3ff-606608cbad05"). InnerVolumeSpecName "kube-api-access-f2r5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.081803 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.116182 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data" (OuterVolumeSpecName: "config-data") pod "5692084b-8a09-42b5-a3ff-606608cbad05" (UID: "5692084b-8a09-42b5-a3ff-606608cbad05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.120751 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5692084b-8a09-42b5-a3ff-606608cbad05" (UID: "5692084b-8a09-42b5-a3ff-606608cbad05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.130494 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.136035 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5692084b-8a09-42b5-a3ff-606608cbad05-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.136143 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2r5b\" (UniqueName: \"kubernetes.io/projected/5692084b-8a09-42b5-a3ff-606608cbad05-kube-api-access-f2r5b\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.683244 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b11296-213d-4d86-8113-152e26f97ed6" path="/var/lib/kubelet/pods/19b11296-213d-4d86-8113-152e26f97ed6/volumes" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.920782 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerStarted","Data":"0ee415c68bed02def78d109af06db33cc66ee31a87a3daa1173ab794a55725ec"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.929276 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5692084b-8a09-42b5-a3ff-606608cbad05","Type":"ContainerDied","Data":"2acf96eb00eb80846f32a2cdc7a9d477c8dd8adf9cc8b01247e7031b7769fe07"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.929335 4801 scope.go:117] "RemoveContainer" containerID="c43c05cc7a73c5cf591bfafaf326ddf277f46d981dbbdac23b76c9bf62bc337a" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.929530 4801 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.958033 4801 generic.go:334] "Generic (PLEG): container finished" podID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerID="7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3" exitCode=0 Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.958072 4801 generic.go:334] "Generic (PLEG): container finished" podID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerID="99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07" exitCode=0 Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.958122 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerDied","Data":"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.958159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerDied","Data":"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.965043 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerStarted","Data":"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.965100 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerStarted","Data":"198d92ffbbc6260a322f36a84f16fe6252735cfeb422ae8f861a10f0d9e40a26"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.986882 4801 generic.go:334] "Generic (PLEG): container finished" podID="48c521a7-a1e5-47c3-81db-b37354be3c6b" 
containerID="3c1e468dcd1d461516e4f8b755b6acc5a33e1ae8f4f4fc88d7bc67e2a38bc316" exitCode=0 Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.988455 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerDied","Data":"3c1e468dcd1d461516e4f8b755b6acc5a33e1ae8f4f4fc88d7bc67e2a38bc316"} Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.993285 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:16 crc kubenswrapper[4801]: I1124 21:31:16.995707 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.014878 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.028293 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:17 crc kubenswrapper[4801]: E1124 21:31:17.028853 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-log" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.028872 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-log" Nov 24 21:31:17 crc kubenswrapper[4801]: E1124 21:31:17.028906 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-api" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.028912 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-api" Nov 24 21:31:17 crc kubenswrapper[4801]: E1124 21:31:17.028956 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" 
containerName="nova-scheduler-scheduler" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.028963 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" containerName="nova-scheduler-scheduler" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.029193 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" containerName="nova-scheduler-scheduler" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.029223 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-api" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.029237 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" containerName="nova-api-log" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.030180 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.035699 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.070309 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.119782 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle\") pod \"48c521a7-a1e5-47c3-81db-b37354be3c6b\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.119929 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data\") pod 
\"48c521a7-a1e5-47c3-81db-b37354be3c6b\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.119957 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhgf\" (UniqueName: \"kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf\") pod \"48c521a7-a1e5-47c3-81db-b37354be3c6b\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.120162 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs\") pod \"48c521a7-a1e5-47c3-81db-b37354be3c6b\" (UID: \"48c521a7-a1e5-47c3-81db-b37354be3c6b\") " Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.120610 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddfj\" (UniqueName: \"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.120747 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.120788 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: 
I1124 21:31:17.122206 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs" (OuterVolumeSpecName: "logs") pod "48c521a7-a1e5-47c3-81db-b37354be3c6b" (UID: "48c521a7-a1e5-47c3-81db-b37354be3c6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.133849 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf" (OuterVolumeSpecName: "kube-api-access-fxhgf") pod "48c521a7-a1e5-47c3-81db-b37354be3c6b" (UID: "48c521a7-a1e5-47c3-81db-b37354be3c6b"). InnerVolumeSpecName "kube-api-access-fxhgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.223416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddfj\" (UniqueName: \"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.223594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.223630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 
21:31:17.223724 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhgf\" (UniqueName: \"kubernetes.io/projected/48c521a7-a1e5-47c3-81db-b37354be3c6b-kube-api-access-fxhgf\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.223737 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c521a7-a1e5-47c3-81db-b37354be3c6b-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.241266 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.245280 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.267956 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data" (OuterVolumeSpecName: "config-data") pod "48c521a7-a1e5-47c3-81db-b37354be3c6b" (UID: "48c521a7-a1e5-47c3-81db-b37354be3c6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.281818 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48c521a7-a1e5-47c3-81db-b37354be3c6b" (UID: "48c521a7-a1e5-47c3-81db-b37354be3c6b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.292234 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddfj\" (UniqueName: \"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj\") pod \"nova-scheduler-0\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " pod="openstack/nova-scheduler-0" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.326664 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.326710 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c521a7-a1e5-47c3-81db-b37354be3c6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:17 crc kubenswrapper[4801]: I1124 21:31:17.355122 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.001428 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:18 crc kubenswrapper[4801]: W1124 21:31:18.016122 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02e0ae28_c649_4f69_95e0_5c8d61ee602c.slice/crio-6a710e16d4f483c3343bf6594734d07d0258cc43a3f74fb500dfc0f0ca08a247 WatchSource:0}: Error finding container 6a710e16d4f483c3343bf6594734d07d0258cc43a3f74fb500dfc0f0ca08a247: Status 404 returned error can't find the container with id 6a710e16d4f483c3343bf6594734d07d0258cc43a3f74fb500dfc0f0ca08a247 Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.020973 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerStarted","Data":"b567dac09379818479cff783094ffbdc3c48bb6ae117db516f56c0d8d435ebc8"} Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.030158 4801 generic.go:334] "Generic (PLEG): container finished" podID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerID="f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135" exitCode=0 Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.030228 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerDied","Data":"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135"} Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.035598 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerStarted","Data":"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772"} Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.043116 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48c521a7-a1e5-47c3-81db-b37354be3c6b","Type":"ContainerDied","Data":"c7fb4c752e10def9915ae4aa78ebb4b327d77f7c5a4a8d7dab864a02d9f9ab59"} Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.043152 4801 scope.go:117] "RemoveContainer" containerID="3c1e468dcd1d461516e4f8b755b6acc5a33e1ae8f4f4fc88d7bc67e2a38bc316" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.043250 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.079947 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.079920271 podStartE2EDuration="4.079920271s" podCreationTimestamp="2025-11-24 21:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:18.058487567 +0000 UTC m=+1450.141074237" watchObservedRunningTime="2025-11-24 21:31:18.079920271 +0000 UTC m=+1450.162506941" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.119787 4801 scope.go:117] "RemoveContainer" containerID="1c2ffeaab5783823801b1f3e7251892d26113aaad7be9582bc6f4d77436f3855" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.134945 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.169389 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.189548 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.192334 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.197135 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.197758 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.256622 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zpg\" (UniqueName: \"kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.256680 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.256861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.257047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.359798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.359909 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.360115 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zpg\" (UniqueName: \"kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.360483 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.360521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.369648 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.369722 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.399543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zpg\" (UniqueName: \"kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg\") pod \"nova-api-0\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.545327 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.787434 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c521a7-a1e5-47c3-81db-b37354be3c6b" path="/var/lib/kubelet/pods/48c521a7-a1e5-47c3-81db-b37354be3c6b/volumes" Nov 24 21:31:18 crc kubenswrapper[4801]: I1124 21:31:18.788963 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5692084b-8a09-42b5-a3ff-606608cbad05" path="/var/lib/kubelet/pods/5692084b-8a09-42b5-a3ff-606608cbad05/volumes" Nov 24 21:31:19 crc kubenswrapper[4801]: I1124 21:31:19.084179 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02e0ae28-c649-4f69-95e0-5c8d61ee602c","Type":"ContainerStarted","Data":"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16"} Nov 24 21:31:19 crc kubenswrapper[4801]: I1124 21:31:19.084825 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02e0ae28-c649-4f69-95e0-5c8d61ee602c","Type":"ContainerStarted","Data":"6a710e16d4f483c3343bf6594734d07d0258cc43a3f74fb500dfc0f0ca08a247"} Nov 24 21:31:19 crc kubenswrapper[4801]: I1124 21:31:19.089950 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerStarted","Data":"09dd62b29fcc38faeba1782a0ab7ae1eb4b3caf9052cd56152dd53fe453b4ff2"} Nov 24 21:31:19 crc kubenswrapper[4801]: I1124 21:31:19.113337 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.113308978 podStartE2EDuration="3.113308978s" podCreationTimestamp="2025-11-24 21:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:19.111161512 +0000 UTC m=+1451.193748182" watchObservedRunningTime="2025-11-24 21:31:19.113308978 +0000 UTC m=+1451.195895648" Nov 24 21:31:19 crc kubenswrapper[4801]: I1124 21:31:19.267819 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:19 crc kubenswrapper[4801]: W1124 21:31:19.271402 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a0ed4a_109c_447d_b14e_7e3d8b46f9f6.slice/crio-027bf0db23680c624960f2c2f1819ac89101735c6744b4441f52e1b6650bb524 WatchSource:0}: Error finding container 027bf0db23680c624960f2c2f1819ac89101735c6744b4441f52e1b6650bb524: Status 404 returned error can't find the container with id 027bf0db23680c624960f2c2f1819ac89101735c6744b4441f52e1b6650bb524 Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.105944 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerStarted","Data":"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0"} Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.106748 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerStarted","Data":"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f"} Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.106777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerStarted","Data":"027bf0db23680c624960f2c2f1819ac89101735c6744b4441f52e1b6650bb524"} Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.110893 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerStarted","Data":"46301a516ef73affc94e6bcb055c9fdf51393526842b1c486ea6685ed62bfbc4"} Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.111066 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.111087 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="proxy-httpd" containerID="cri-o://46301a516ef73affc94e6bcb055c9fdf51393526842b1c486ea6685ed62bfbc4" gracePeriod=30 Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.111130 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="sg-core" containerID="cri-o://09dd62b29fcc38faeba1782a0ab7ae1eb4b3caf9052cd56152dd53fe453b4ff2" gracePeriod=30 Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.111161 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-notification-agent" containerID="cri-o://b567dac09379818479cff783094ffbdc3c48bb6ae117db516f56c0d8d435ebc8" gracePeriod=30 Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 
21:31:20.111058 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-central-agent" containerID="cri-o://0ee415c68bed02def78d109af06db33cc66ee31a87a3daa1173ab794a55725ec" gracePeriod=30 Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.147878 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.147850212 podStartE2EDuration="2.147850212s" podCreationTimestamp="2025-11-24 21:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:20.139811233 +0000 UTC m=+1452.222397903" watchObservedRunningTime="2025-11-24 21:31:20.147850212 +0000 UTC m=+1452.230436882" Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.166299 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.063484207 podStartE2EDuration="7.166264742s" podCreationTimestamp="2025-11-24 21:31:13 +0000 UTC" firstStartedPulling="2025-11-24 21:31:15.321573687 +0000 UTC m=+1447.404160357" lastFinishedPulling="2025-11-24 21:31:19.424354222 +0000 UTC m=+1451.506940892" observedRunningTime="2025-11-24 21:31:20.165848659 +0000 UTC m=+1452.248435329" watchObservedRunningTime="2025-11-24 21:31:20.166264742 +0000 UTC m=+1452.248851442" Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.256819 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.256902 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.854676 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 
21:31:20 crc kubenswrapper[4801]: I1124 21:31:20.925631 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.108591 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129332 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerID="46301a516ef73affc94e6bcb055c9fdf51393526842b1c486ea6685ed62bfbc4" exitCode=0 Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129415 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerID="09dd62b29fcc38faeba1782a0ab7ae1eb4b3caf9052cd56152dd53fe453b4ff2" exitCode=2 Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129425 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerID="b567dac09379818479cff783094ffbdc3c48bb6ae117db516f56c0d8d435ebc8" exitCode=0 Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129426 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerDied","Data":"46301a516ef73affc94e6bcb055c9fdf51393526842b1c486ea6685ed62bfbc4"} Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129495 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerDied","Data":"09dd62b29fcc38faeba1782a0ab7ae1eb4b3caf9052cd56152dd53fe453b4ff2"} Nov 24 21:31:21 crc kubenswrapper[4801]: I1124 21:31:21.129515 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerDied","Data":"b567dac09379818479cff783094ffbdc3c48bb6ae117db516f56c0d8d435ebc8"} Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.144441 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sgmc9" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="registry-server" containerID="cri-o://7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc" gracePeriod=2 Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.356187 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.787804 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.823947 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities\") pod \"cc8afac2-234d-4592-b1c8-632359844ce5\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.824483 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rcjr\" (UniqueName: \"kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr\") pod \"cc8afac2-234d-4592-b1c8-632359844ce5\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.824714 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content\") pod \"cc8afac2-234d-4592-b1c8-632359844ce5\" (UID: \"cc8afac2-234d-4592-b1c8-632359844ce5\") " Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 
21:31:22.824997 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities" (OuterVolumeSpecName: "utilities") pod "cc8afac2-234d-4592-b1c8-632359844ce5" (UID: "cc8afac2-234d-4592-b1c8-632359844ce5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.825583 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.837657 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr" (OuterVolumeSpecName: "kube-api-access-5rcjr") pod "cc8afac2-234d-4592-b1c8-632359844ce5" (UID: "cc8afac2-234d-4592-b1c8-632359844ce5"). InnerVolumeSpecName "kube-api-access-5rcjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.855490 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc8afac2-234d-4592-b1c8-632359844ce5" (UID: "cc8afac2-234d-4592-b1c8-632359844ce5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.927951 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8afac2-234d-4592-b1c8-632359844ce5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:22 crc kubenswrapper[4801]: I1124 21:31:22.928007 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rcjr\" (UniqueName: \"kubernetes.io/projected/cc8afac2-234d-4592-b1c8-632359844ce5-kube-api-access-5rcjr\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.075151 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.161626 4801 generic.go:334] "Generic (PLEG): container finished" podID="cc8afac2-234d-4592-b1c8-632359844ce5" containerID="7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc" exitCode=0 Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.161707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerDied","Data":"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc"} Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.161753 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgmc9" event={"ID":"cc8afac2-234d-4592-b1c8-632359844ce5","Type":"ContainerDied","Data":"c6b596a298506b2f49a3e921a2debc71c054c5d9276e10dc902c79ff11587d6e"} Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.161786 4801 scope.go:117] "RemoveContainer" containerID="7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.162071 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgmc9" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.198252 4801 scope.go:117] "RemoveContainer" containerID="26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.210035 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.220398 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgmc9"] Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.230609 4801 scope.go:117] "RemoveContainer" containerID="b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.295534 4801 scope.go:117] "RemoveContainer" containerID="7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc" Nov 24 21:31:23 crc kubenswrapper[4801]: E1124 21:31:23.296636 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc\": container with ID starting with 7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc not found: ID does not exist" containerID="7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.296697 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc"} err="failed to get container status \"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc\": rpc error: code = NotFound desc = could not find container \"7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc\": container with ID starting with 7b1995a710a2080bb9fddb8f48cb59520079c9a962cd05505ba4fe3929ad37cc not found: 
ID does not exist" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.296739 4801 scope.go:117] "RemoveContainer" containerID="26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3" Nov 24 21:31:23 crc kubenswrapper[4801]: E1124 21:31:23.297378 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3\": container with ID starting with 26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3 not found: ID does not exist" containerID="26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.297404 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3"} err="failed to get container status \"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3\": rpc error: code = NotFound desc = could not find container \"26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3\": container with ID starting with 26eea66210e92dd6331e635130aa437942de9f0321266ac86717517decc01bd3 not found: ID does not exist" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.297421 4801 scope.go:117] "RemoveContainer" containerID="b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365" Nov 24 21:31:23 crc kubenswrapper[4801]: E1124 21:31:23.297801 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365\": container with ID starting with b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365 not found: ID does not exist" containerID="b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365" Nov 24 21:31:23 crc kubenswrapper[4801]: I1124 21:31:23.297827 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365"} err="failed to get container status \"b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365\": rpc error: code = NotFound desc = could not find container \"b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365\": container with ID starting with b94f0601fd9249706c3433162a60499616bddc728a8e4200001eb1c1ba01c365 not found: ID does not exist" Nov 24 21:31:24 crc kubenswrapper[4801]: I1124 21:31:24.319886 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:31:24 crc kubenswrapper[4801]: I1124 21:31:24.321086 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:31:24 crc kubenswrapper[4801]: I1124 21:31:24.681730 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" path="/var/lib/kubelet/pods/cc8afac2-234d-4592-b1c8-632359844ce5/volumes" Nov 24 21:31:25 crc kubenswrapper[4801]: I1124 21:31:25.257238 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:31:25 crc kubenswrapper[4801]: I1124 21:31:25.257319 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.246893 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerID="0ee415c68bed02def78d109af06db33cc66ee31a87a3daa1173ab794a55725ec" exitCode=0 Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.246968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerDied","Data":"0ee415c68bed02def78d109af06db33cc66ee31a87a3daa1173ab794a55725ec"} Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.274544 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.274558 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.560199 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.640563 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.640701 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.640863 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.641014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.641159 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.641204 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.641302 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp54r\" (UniqueName: \"kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r\") pod \"fd5d6d9d-4009-479e-a7b2-a5c633334729\" (UID: \"fd5d6d9d-4009-479e-a7b2-a5c633334729\") " Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.642767 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.642863 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.659571 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r" (OuterVolumeSpecName: "kube-api-access-dp54r") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "kube-api-access-dp54r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.669717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts" (OuterVolumeSpecName: "scripts") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.701083 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.746011 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.747051 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.747352 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.747452 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5d6d9d-4009-479e-a7b2-a5c633334729-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: 
I1124 21:31:26.747508 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp54r\" (UniqueName: \"kubernetes.io/projected/fd5d6d9d-4009-479e-a7b2-a5c633334729-kube-api-access-dp54r\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.779182 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.809464 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data" (OuterVolumeSpecName: "config-data") pod "fd5d6d9d-4009-479e-a7b2-a5c633334729" (UID: "fd5d6d9d-4009-479e-a7b2-a5c633334729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.851119 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:26 crc kubenswrapper[4801]: I1124 21:31:26.851170 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5d6d9d-4009-479e-a7b2-a5c633334729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.289079 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.289523 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5d6d9d-4009-479e-a7b2-a5c633334729","Type":"ContainerDied","Data":"fb43305f7c8f02634cff2b694acb9b123a8873717cd9aea39cdd512edf86a1ba"} Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.289596 4801 scope.go:117] "RemoveContainer" containerID="46301a516ef73affc94e6bcb055c9fdf51393526842b1c486ea6685ed62bfbc4" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.323887 4801 scope.go:117] "RemoveContainer" containerID="09dd62b29fcc38faeba1782a0ab7ae1eb4b3caf9052cd56152dd53fe453b4ff2" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.356559 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.372229 4801 scope.go:117] "RemoveContainer" containerID="b567dac09379818479cff783094ffbdc3c48bb6ae117db516f56c0d8d435ebc8" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.384259 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.391133 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.404389 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405168 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="extract-utilities" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405190 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="extract-utilities" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405221 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="sg-core" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405229 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="sg-core" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405242 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="proxy-httpd" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405250 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="proxy-httpd" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405272 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="registry-server" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405279 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="registry-server" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405292 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="extract-content" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405302 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="extract-content" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405332 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-notification-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405341 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-notification-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: E1124 21:31:27.405354 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-central-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405439 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-central-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405729 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="proxy-httpd" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405755 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="sg-core" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405769 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8afac2-234d-4592-b1c8-632359844ce5" containerName="registry-server" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405785 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-notification-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.405799 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" containerName="ceilometer-central-agent" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.408427 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.411709 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.415389 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.417227 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.418309 4801 scope.go:117] "RemoveContainer" containerID="0ee415c68bed02def78d109af06db33cc66ee31a87a3daa1173ab794a55725ec" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.449453 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471269 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471319 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471377 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 
21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471418 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471461 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xtt\" (UniqueName: \"kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.471478 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575474 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xtt\" (UniqueName: \"kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575551 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575777 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575819 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575885 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575923 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.575949 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc 
kubenswrapper[4801]: I1124 21:31:27.576695 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.576752 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.582040 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.582659 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.583518 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.583999 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.601846 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xtt\" (UniqueName: \"kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt\") pod \"ceilometer-0\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " pod="openstack/ceilometer-0" Nov 24 21:31:27 crc kubenswrapper[4801]: I1124 21:31:27.745997 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:28 crc kubenswrapper[4801]: I1124 21:31:28.307235 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:28 crc kubenswrapper[4801]: I1124 21:31:28.364778 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:31:28 crc kubenswrapper[4801]: I1124 21:31:28.546188 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:31:28 crc kubenswrapper[4801]: I1124 21:31:28.546523 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:31:28 crc kubenswrapper[4801]: I1124 21:31:28.681909 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5d6d9d-4009-479e-a7b2-a5c633334729" path="/var/lib/kubelet/pods/fd5d6d9d-4009-479e-a7b2-a5c633334729/volumes" Nov 24 21:31:29 crc kubenswrapper[4801]: I1124 21:31:29.324800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerStarted","Data":"50b9986578d48a4ecfc7f293f93f800f16bf7da8506ddd0bced2b5ba4e8c70a0"} Nov 24 21:31:29 crc kubenswrapper[4801]: I1124 21:31:29.325261 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerStarted","Data":"29c65aaff6f582bb21ce7115d9717c631b285294152d325d2f8dc0f1de01167c"} Nov 24 21:31:29 crc kubenswrapper[4801]: I1124 21:31:29.628518 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:29 crc kubenswrapper[4801]: I1124 21:31:29.628664 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:31:30 crc kubenswrapper[4801]: I1124 21:31:30.345670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerStarted","Data":"2872ffbba724686fa8dc587d0e4b18b8a90f290f67ef07f7df3c49e65d03581a"} Nov 24 21:31:31 crc kubenswrapper[4801]: I1124 21:31:31.362602 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerStarted","Data":"cc2853516e85a2fb1fc278bfa3c2834e398b01cefceb2083fc24bec9e5e71e00"} Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.380315 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerStarted","Data":"2832a85fa4671528a632bec8089aa44ca8e7d5e947b727dd32b7831c56d3d6ac"} Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.381580 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.386570 4801 
generic.go:334] "Generic (PLEG): container finished" podID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" containerID="2d46854f650c0a13b4553febefc7a37edcebf2daf2c9d4d39e15235419a2396d" exitCode=137 Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.386624 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c39cc9a-240b-4624-a0cb-33ea4538dcad","Type":"ContainerDied","Data":"2d46854f650c0a13b4553febefc7a37edcebf2daf2c9d4d39e15235419a2396d"} Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.421455 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9703926200000002 podStartE2EDuration="5.421427339s" podCreationTimestamp="2025-11-24 21:31:27 +0000 UTC" firstStartedPulling="2025-11-24 21:31:28.329693776 +0000 UTC m=+1460.412280456" lastFinishedPulling="2025-11-24 21:31:31.780728495 +0000 UTC m=+1463.863315175" observedRunningTime="2025-11-24 21:31:32.409066585 +0000 UTC m=+1464.491653255" watchObservedRunningTime="2025-11-24 21:31:32.421427339 +0000 UTC m=+1464.504014009" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.698154 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.811334 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle\") pod \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.811454 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfr99\" (UniqueName: \"kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99\") pod \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.811576 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data\") pod \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\" (UID: \"3c39cc9a-240b-4624-a0cb-33ea4538dcad\") " Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.820480 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99" (OuterVolumeSpecName: "kube-api-access-qfr99") pod "3c39cc9a-240b-4624-a0cb-33ea4538dcad" (UID: "3c39cc9a-240b-4624-a0cb-33ea4538dcad"). InnerVolumeSpecName "kube-api-access-qfr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.863382 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c39cc9a-240b-4624-a0cb-33ea4538dcad" (UID: "3c39cc9a-240b-4624-a0cb-33ea4538dcad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.864234 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data" (OuterVolumeSpecName: "config-data") pod "3c39cc9a-240b-4624-a0cb-33ea4538dcad" (UID: "3c39cc9a-240b-4624-a0cb-33ea4538dcad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.915762 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.916135 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfr99\" (UniqueName: \"kubernetes.io/projected/3c39cc9a-240b-4624-a0cb-33ea4538dcad-kube-api-access-qfr99\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:32 crc kubenswrapper[4801]: I1124 21:31:32.916148 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c39cc9a-240b-4624-a0cb-33ea4538dcad-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.402123 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c39cc9a-240b-4624-a0cb-33ea4538dcad","Type":"ContainerDied","Data":"66df7401192322d21988d03c59a909136779026f97737a1e84eced81db36fbc3"} Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.402155 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.402193 4801 scope.go:117] "RemoveContainer" containerID="2d46854f650c0a13b4553febefc7a37edcebf2daf2c9d4d39e15235419a2396d" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.472678 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.492575 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.508514 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:31:33 crc kubenswrapper[4801]: E1124 21:31:33.509464 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.509483 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.509781 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.510819 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.527113 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.527436 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.527517 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.540665 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.654504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljpw4\" (UniqueName: \"kubernetes.io/projected/661955ce-b069-43fc-a5de-8f5aaafd0c6d-kube-api-access-ljpw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.654552 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.654605 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 
24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.654653 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.654696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.758085 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljpw4\" (UniqueName: \"kubernetes.io/projected/661955ce-b069-43fc-a5de-8f5aaafd0c6d-kube-api-access-ljpw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.758152 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.758230 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc 
kubenswrapper[4801]: I1124 21:31:33.758308 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.758400 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.765824 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.766791 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.766987 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.767411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661955ce-b069-43fc-a5de-8f5aaafd0c6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.783838 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljpw4\" (UniqueName: \"kubernetes.io/projected/661955ce-b069-43fc-a5de-8f5aaafd0c6d-kube-api-access-ljpw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661955ce-b069-43fc-a5de-8f5aaafd0c6d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:33 crc kubenswrapper[4801]: I1124 21:31:33.840637 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:34 crc kubenswrapper[4801]: I1124 21:31:34.596009 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:31:34 crc kubenswrapper[4801]: W1124 21:31:34.602898 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod661955ce_b069_43fc_a5de_8f5aaafd0c6d.slice/crio-972a4480ecfa276d1fb8e1f28eb1320c1ee941e09e849bc3bb352631d5292ecf WatchSource:0}: Error finding container 972a4480ecfa276d1fb8e1f28eb1320c1ee941e09e849bc3bb352631d5292ecf: Status 404 returned error can't find the container with id 972a4480ecfa276d1fb8e1f28eb1320c1ee941e09e849bc3bb352631d5292ecf Nov 24 21:31:34 crc kubenswrapper[4801]: I1124 21:31:34.678786 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c39cc9a-240b-4624-a0cb-33ea4538dcad" path="/var/lib/kubelet/pods/3c39cc9a-240b-4624-a0cb-33ea4538dcad/volumes" Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.268465 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.271806 
4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.284537 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.478907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"661955ce-b069-43fc-a5de-8f5aaafd0c6d","Type":"ContainerStarted","Data":"9486328e7a24049d40fdd98b56a6bf80101917c06ba1661af135479d74172a1a"} Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.479584 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"661955ce-b069-43fc-a5de-8f5aaafd0c6d","Type":"ContainerStarted","Data":"972a4480ecfa276d1fb8e1f28eb1320c1ee941e09e849bc3bb352631d5292ecf"} Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.494419 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:31:35 crc kubenswrapper[4801]: I1124 21:31:35.513397 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.513350605 podStartE2EDuration="2.513350605s" podCreationTimestamp="2025-11-24 21:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:35.509918128 +0000 UTC m=+1467.592504798" watchObservedRunningTime="2025-11-24 21:31:35.513350605 +0000 UTC m=+1467.595937275" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.531682 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.535349 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.546376 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.647156 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hztpv\" (UniqueName: \"kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.647569 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.647646 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.752495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.752681 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.752802 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hztpv\" (UniqueName: \"kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.753062 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.753580 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.788646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hztpv\" (UniqueName: \"kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv\") pod \"certified-operators-pzs26\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:36 crc kubenswrapper[4801]: I1124 21:31:36.876106 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:37 crc kubenswrapper[4801]: I1124 21:31:37.417670 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:37 crc kubenswrapper[4801]: I1124 21:31:37.525631 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerStarted","Data":"56a98a7ff6ccbf84dd8b5e62d196f7cb7a9825e7a2577239e2a20ff06a3826b3"} Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.542431 4801 generic.go:334] "Generic (PLEG): container finished" podID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerID="156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9" exitCode=0 Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.543140 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerDied","Data":"156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9"} Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.560570 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.561109 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.570672 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.570890 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:31:38 crc kubenswrapper[4801]: I1124 21:31:38.841288 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:39 crc 
kubenswrapper[4801]: I1124 21:31:39.555740 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.567275 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.845677 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.849042 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.882016 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969025 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969143 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969179 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969218 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ncm\" (UniqueName: \"kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:39 crc kubenswrapper[4801]: I1124 21:31:39.969497 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.073259 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2ncm\" (UniqueName: \"kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.073871 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: 
\"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.074013 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.074097 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.074173 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.074249 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.074989 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.075261 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.075751 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.076313 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.076469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.112250 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2ncm\" (UniqueName: \"kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm\") pod \"dnsmasq-dns-6b7bbf7cf9-58sz2\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 
21:31:40.193822 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.578848 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerStarted","Data":"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d"} Nov 24 21:31:40 crc kubenswrapper[4801]: W1124 21:31:40.736120 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a1163d_2c92_4175_abb9_ef08391fdf5c.slice/crio-135b9886db0adaea4197d1469b6e18e5d6976eea096fbb3ba450235dedc8f627 WatchSource:0}: Error finding container 135b9886db0adaea4197d1469b6e18e5d6976eea096fbb3ba450235dedc8f627: Status 404 returned error can't find the container with id 135b9886db0adaea4197d1469b6e18e5d6976eea096fbb3ba450235dedc8f627 Nov 24 21:31:40 crc kubenswrapper[4801]: I1124 21:31:40.747023 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:31:41 crc kubenswrapper[4801]: I1124 21:31:41.592949 4801 generic.go:334] "Generic (PLEG): container finished" podID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerID="8c5eba765539f9db5bedc07b3c65dd804dabe0db082e3aabdfca00f61ef5a009" exitCode=0 Nov 24 21:31:41 crc kubenswrapper[4801]: I1124 21:31:41.593038 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" event={"ID":"15a1163d-2c92-4175-abb9-ef08391fdf5c","Type":"ContainerDied","Data":"8c5eba765539f9db5bedc07b3c65dd804dabe0db082e3aabdfca00f61ef5a009"} Nov 24 21:31:41 crc kubenswrapper[4801]: I1124 21:31:41.593433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" 
event={"ID":"15a1163d-2c92-4175-abb9-ef08391fdf5c","Type":"ContainerStarted","Data":"135b9886db0adaea4197d1469b6e18e5d6976eea096fbb3ba450235dedc8f627"} Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.608837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" event={"ID":"15a1163d-2c92-4175-abb9-ef08391fdf5c","Type":"ContainerStarted","Data":"52cf4297114bc46fda3a9c797eb7cc7ac3d5364790bb5e3ea655b416d367d0a1"} Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.609832 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.612025 4801 generic.go:334] "Generic (PLEG): container finished" podID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerID="302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d" exitCode=0 Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.612071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerDied","Data":"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d"} Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.649684 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" podStartSLOduration=3.6496549160000002 podStartE2EDuration="3.649654916s" podCreationTimestamp="2025-11-24 21:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:42.634133495 +0000 UTC m=+1474.716720175" watchObservedRunningTime="2025-11-24 21:31:42.649654916 +0000 UTC m=+1474.732241586" Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.934344 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 
21:31:42.934774 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-central-agent" containerID="cri-o://50b9986578d48a4ecfc7f293f93f800f16bf7da8506ddd0bced2b5ba4e8c70a0" gracePeriod=30 Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.934966 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="proxy-httpd" containerID="cri-o://2832a85fa4671528a632bec8089aa44ca8e7d5e947b727dd32b7831c56d3d6ac" gracePeriod=30 Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.935208 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="sg-core" containerID="cri-o://cc2853516e85a2fb1fc278bfa3c2834e398b01cefceb2083fc24bec9e5e71e00" gracePeriod=30 Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.935257 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-notification-agent" containerID="cri-o://2872ffbba724686fa8dc587d0e4b18b8a90f290f67ef07f7df3c49e65d03581a" gracePeriod=30 Nov 24 21:31:42 crc kubenswrapper[4801]: I1124 21:31:42.956787 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.254:3000/\": EOF" Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.335991 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.336689 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-log" containerID="cri-o://6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f" gracePeriod=30 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.336882 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-api" containerID="cri-o://f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0" gracePeriod=30 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.627250 4801 generic.go:334] "Generic (PLEG): container finished" podID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerID="6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f" exitCode=143 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.627351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerDied","Data":"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f"} Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.632885 4801 generic.go:334] "Generic (PLEG): container finished" podID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerID="2832a85fa4671528a632bec8089aa44ca8e7d5e947b727dd32b7831c56d3d6ac" exitCode=0 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.632999 4801 generic.go:334] "Generic (PLEG): container finished" podID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerID="cc2853516e85a2fb1fc278bfa3c2834e398b01cefceb2083fc24bec9e5e71e00" exitCode=2 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.633075 4801 generic.go:334] "Generic (PLEG): container finished" podID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerID="50b9986578d48a4ecfc7f293f93f800f16bf7da8506ddd0bced2b5ba4e8c70a0" exitCode=0 Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.632959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerDied","Data":"2832a85fa4671528a632bec8089aa44ca8e7d5e947b727dd32b7831c56d3d6ac"} Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.633235 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerDied","Data":"cc2853516e85a2fb1fc278bfa3c2834e398b01cefceb2083fc24bec9e5e71e00"} Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.633257 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerDied","Data":"50b9986578d48a4ecfc7f293f93f800f16bf7da8506ddd0bced2b5ba4e8c70a0"} Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.636768 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerStarted","Data":"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a"} Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.677578 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzs26" podStartSLOduration=3.130810194 podStartE2EDuration="7.677552963s" podCreationTimestamp="2025-11-24 21:31:36 +0000 UTC" firstStartedPulling="2025-11-24 21:31:38.546328054 +0000 UTC m=+1470.628914724" lastFinishedPulling="2025-11-24 21:31:43.093070823 +0000 UTC m=+1475.175657493" observedRunningTime="2025-11-24 21:31:43.667708877 +0000 UTC m=+1475.750295547" watchObservedRunningTime="2025-11-24 21:31:43.677552963 +0000 UTC m=+1475.760139633" Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.841824 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:43 crc kubenswrapper[4801]: I1124 21:31:43.955441 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.680556 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.889822 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z2j4p"] Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.891589 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.895674 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.895843 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.948127 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.948209 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.948304 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.948384 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:44 crc kubenswrapper[4801]: I1124 21:31:44.992420 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z2j4p"] Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.051602 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.051675 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.051749 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 
21:31:45.051793 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.058760 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.061995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.065299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.075639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq\") pod \"nova-cell1-cell-mapping-z2j4p\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.246578 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:45 crc kubenswrapper[4801]: I1124 21:31:45.777969 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z2j4p"] Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.553093 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.596630 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data\") pod \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.596845 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4bm\" (UniqueName: \"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm\") pod \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.596999 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle\") pod \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.597044 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts\") pod \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\" (UID: \"893d7bdd-5169-4d6e-8a02-9a0ac5047308\") " Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.617585 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm" (OuterVolumeSpecName: "kube-api-access-vj4bm") pod "893d7bdd-5169-4d6e-8a02-9a0ac5047308" (UID: "893d7bdd-5169-4d6e-8a02-9a0ac5047308"). InnerVolumeSpecName "kube-api-access-vj4bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.617580 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts" (OuterVolumeSpecName: "scripts") pod "893d7bdd-5169-4d6e-8a02-9a0ac5047308" (UID: "893d7bdd-5169-4d6e-8a02-9a0ac5047308"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.701493 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.701540 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4bm\" (UniqueName: \"kubernetes.io/projected/893d7bdd-5169-4d6e-8a02-9a0ac5047308-kube-api-access-vj4bm\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.740245 4801 generic.go:334] "Generic (PLEG): container finished" podID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerID="8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8" exitCode=137 Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.740745 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.784492 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z2j4p" podStartSLOduration=2.7844653729999997 podStartE2EDuration="2.784465373s" podCreationTimestamp="2025-11-24 21:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:46.740021986 +0000 UTC m=+1478.822608646" watchObservedRunningTime="2025-11-24 21:31:46.784465373 +0000 UTC m=+1478.867052043" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.784914 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z2j4p" event={"ID":"532d5719-89fc-47bc-bab7-9afb76342bf3","Type":"ContainerStarted","Data":"38f50e7ff5ce3f16b80c107066d3ccc44fd234f3a1200415ea8158d398b251c1"} Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.796181 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z2j4p" event={"ID":"532d5719-89fc-47bc-bab7-9afb76342bf3","Type":"ContainerStarted","Data":"b67bc7a32f4ea9be793705d55d99383fb08bd8452facd9af45cc0b6341beec12"} Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.796213 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerDied","Data":"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8"} Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.796256 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"893d7bdd-5169-4d6e-8a02-9a0ac5047308","Type":"ContainerDied","Data":"8f3da7a3c11af05baaed914965eb4967148ffe31ebd910768f018f94a7a3136b"} Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.796279 4801 scope.go:117] "RemoveContainer" 
containerID="8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.880966 4801 scope.go:117] "RemoveContainer" containerID="f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.883945 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.885801 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.896499 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data" (OuterVolumeSpecName: "config-data") pod "893d7bdd-5169-4d6e-8a02-9a0ac5047308" (UID: "893d7bdd-5169-4d6e-8a02-9a0ac5047308"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.928701 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.933321 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "893d7bdd-5169-4d6e-8a02-9a0ac5047308" (UID: "893d7bdd-5169-4d6e-8a02-9a0ac5047308"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:46 crc kubenswrapper[4801]: I1124 21:31:46.979169 4801 scope.go:117] "RemoveContainer" containerID="7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.032898 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893d7bdd-5169-4d6e-8a02-9a0ac5047308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.060595 4801 scope.go:117] "RemoveContainer" containerID="99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.089532 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.093981 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.119540 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.159437 4801 scope.go:117] "RemoveContainer" containerID="8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.163952 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8\": container with ID starting with 8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8 not found: ID does not exist" containerID="8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.163993 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8"} err="failed to get container status \"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8\": rpc error: code = NotFound desc = could not find container \"8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8\": container with ID starting with 8c0f665a6ca7bd75281b36334d50a78fe2137e8535a4e870bda468fc8d76d7c8 not found: ID does not exist" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.164071 4801 scope.go:117] "RemoveContainer" containerID="f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.166629 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135\": container with ID starting with f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135 not found: ID does not exist" containerID="f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.166688 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135"} err="failed to get container status \"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135\": rpc error: code = NotFound desc = could not find container \"f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135\": container with ID starting with f90fa8002ac38d50c66a875cb0b84af6e70d52820ed19264150d3f1de03ee135 not found: ID does not exist" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.166721 4801 scope.go:117] "RemoveContainer" containerID="7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.167114 4801 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3\": container with ID starting with 7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3 not found: ID does not exist" containerID="7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.167209 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3"} err="failed to get container status \"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3\": rpc error: code = NotFound desc = could not find container \"7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3\": container with ID starting with 7f8483fc0b5f6b40b3cf425f5a9ee51e141061ff4ca6183fff3d8a8c89fd6ee3 not found: ID does not exist" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.167250 4801 scope.go:117] "RemoveContainer" containerID="99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.167855 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07\": container with ID starting with 99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07 not found: ID does not exist" containerID="99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.167904 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07"} err="failed to get container status \"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07\": rpc error: code = NotFound desc = could not find container 
\"99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07\": container with ID starting with 99b7b8d978b40e2675f4e4748a205435c79bde027e6b59c3935f95652d4bde07 not found: ID does not exist" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.235809 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.237043 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-api" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237084 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-api" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.237104 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-notifier" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237110 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-notifier" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.237434 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-evaluator" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237444 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-evaluator" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.237495 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-listener" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237503 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-listener" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237856 4801 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-api" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237876 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-evaluator" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237887 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-listener" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.237901 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" containerName="aodh-notifier" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.248510 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.251680 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.252744 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.256095 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5ndmv" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.256188 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.256520 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.286438 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384433 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384582 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384636 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384798 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.384847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cft\" (UniqueName: \"kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft\") pod \"aodh-0\" (UID: 
\"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487223 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487308 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487384 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68cft\" (UniqueName: \"kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.487553 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.497588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.497645 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.499475 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.499687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.499990 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.516662 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68cft\" (UniqueName: \"kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft\") pod \"aodh-0\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.600485 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.743974 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.780270 4801 generic.go:334] "Generic (PLEG): container finished" podID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerID="f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0" exitCode=0 Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.780330 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerDied","Data":"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0"} Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.780379 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6","Type":"ContainerDied","Data":"027bf0db23680c624960f2c2f1819ac89101735c6744b4441f52e1b6650bb524"} Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.780398 4801 scope.go:117] "RemoveContainer" containerID="f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.780578 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.789529 4801 generic.go:334] "Generic (PLEG): container finished" podID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerID="2872ffbba724686fa8dc587d0e4b18b8a90f290f67ef07f7df3c49e65d03581a" exitCode=0 Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.790703 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerDied","Data":"2872ffbba724686fa8dc587d0e4b18b8a90f290f67ef07f7df3c49e65d03581a"} Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.804122 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle\") pod \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.804267 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5zpg\" (UniqueName: \"kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg\") pod \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.805260 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data\") pod \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\" (UID: \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.805810 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs\") pod \"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\" (UID: 
\"44a0ed4a-109c-447d-b14e-7e3d8b46f9f6\") " Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.810585 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg" (OuterVolumeSpecName: "kube-api-access-k5zpg") pod "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" (UID: "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6"). InnerVolumeSpecName "kube-api-access-k5zpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.811702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs" (OuterVolumeSpecName: "logs") pod "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" (UID: "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.876762 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" (UID: "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.879263 4801 scope.go:117] "RemoveContainer" containerID="6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.886542 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data" (OuterVolumeSpecName: "config-data") pod "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" (UID: "44a0ed4a-109c-447d-b14e-7e3d8b46f9f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.911306 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.919023 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.919052 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.919066 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5zpg\" (UniqueName: \"kubernetes.io/projected/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-kube-api-access-k5zpg\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.919079 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.973782 4801 scope.go:117] "RemoveContainer" containerID="f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.974397 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0\": container with ID starting with f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0 not found: ID does not exist" containerID="f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.974432 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0"} err="failed to get container status \"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0\": rpc error: code = NotFound desc = could not find container \"f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0\": container with ID starting with f841d19f18801b6eda89bafe394aeedcd656000782fcff45358a1d3c1a0cdae0 not found: ID does not exist" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.974456 4801 scope.go:117] "RemoveContainer" containerID="6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f" Nov 24 21:31:47 crc kubenswrapper[4801]: E1124 21:31:47.974750 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f\": container with ID starting with 6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f not found: ID does not exist" containerID="6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f" Nov 24 21:31:47 crc kubenswrapper[4801]: I1124 21:31:47.974795 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f"} err="failed to get container status \"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f\": rpc error: code = NotFound desc = could not find container \"6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f\": container with ID starting with 6127de06fbe49897da91be51c34fc9b5d0033aab394a51223e35bbd4e26cfa9f not found: ID does not exist" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.021393 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.021909 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022026 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022112 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022394 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022486 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8xtt\" (UniqueName: \"kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt\") pod \"df730a9c-7d92-4c62-aabb-85f96b25e85e\" (UID: \"df730a9c-7d92-4c62-aabb-85f96b25e85e\") " Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022500 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.022748 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.027567 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt" (OuterVolumeSpecName: "kube-api-access-l8xtt") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "kube-api-access-l8xtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.038626 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts" (OuterVolumeSpecName: "scripts") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.069390 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.125894 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.126004 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.126021 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.126032 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8xtt\" (UniqueName: \"kubernetes.io/projected/df730a9c-7d92-4c62-aabb-85f96b25e85e-kube-api-access-l8xtt\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.126043 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df730a9c-7d92-4c62-aabb-85f96b25e85e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.199006 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.203606 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data" (OuterVolumeSpecName: "config-data") pod "df730a9c-7d92-4c62-aabb-85f96b25e85e" (UID: "df730a9c-7d92-4c62-aabb-85f96b25e85e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.229211 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.229279 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df730a9c-7d92-4c62-aabb-85f96b25e85e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.330277 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a0ed4a_109c_447d_b14e_7e3d8b46f9f6.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.350894 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.387922 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.417155 4801 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.431418 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432485 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-api" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432511 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-api" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432535 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="proxy-httpd" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432546 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="proxy-httpd" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432566 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-central-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432573 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-central-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432605 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-notification-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432615 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-notification-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432627 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" 
containerName="nova-api-log" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432635 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-log" Nov 24 21:31:48 crc kubenswrapper[4801]: E1124 21:31:48.432668 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="sg-core" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432677 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="sg-core" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432924 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="proxy-httpd" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432951 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-notification-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432972 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-log" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432987 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="sg-core" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.432997 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" containerName="nova-api-api" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.433012 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" containerName="ceilometer-central-agent" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.434653 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.437061 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.437306 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.439936 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.475253 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545062 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545125 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545312 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545399 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545436 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.545487 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648438 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648538 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648576 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " 
pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648680 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648747 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.648774 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.649188 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.656127 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.656288 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.656598 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.657850 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.669408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8\") pod \"nova-api-0\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.685898 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a0ed4a-109c-447d-b14e-7e3d8b46f9f6" path="/var/lib/kubelet/pods/44a0ed4a-109c-447d-b14e-7e3d8b46f9f6/volumes" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.686798 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893d7bdd-5169-4d6e-8a02-9a0ac5047308" path="/var/lib/kubelet/pods/893d7bdd-5169-4d6e-8a02-9a0ac5047308/volumes" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.763261 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.810339 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df730a9c-7d92-4c62-aabb-85f96b25e85e","Type":"ContainerDied","Data":"29c65aaff6f582bb21ce7115d9717c631b285294152d325d2f8dc0f1de01167c"} Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.811294 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.811438 4801 scope.go:117] "RemoveContainer" containerID="2832a85fa4671528a632bec8089aa44ca8e7d5e947b727dd32b7831c56d3d6ac" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.813527 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerStarted","Data":"61ebdb3fe94726d59f382ae0c048847313c78899d8b7a520d2235769233c3c00"} Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.842289 4801 scope.go:117] "RemoveContainer" containerID="cc2853516e85a2fb1fc278bfa3c2834e398b01cefceb2083fc24bec9e5e71e00" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.912005 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.963148 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:48 crc kubenswrapper[4801]: I1124 21:31:48.984077 4801 scope.go:117] "RemoveContainer" containerID="2872ffbba724686fa8dc587d0e4b18b8a90f290f67ef07f7df3c49e65d03581a" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.018641 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.045499 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:49 
crc kubenswrapper[4801]: I1124 21:31:49.074400 4801 scope.go:117] "RemoveContainer" containerID="50b9986578d48a4ecfc7f293f93f800f16bf7da8506ddd0bced2b5ba4e8c70a0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.078712 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.078880 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.091130 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.091223 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.129375 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.211743 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.211831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.212122 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts\") pod \"ceilometer-0\" 
(UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.213546 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.213698 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.213794 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.214219 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5cs\" (UniqueName: \"kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.326239 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5cs\" (UniqueName: \"kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc 
kubenswrapper[4801]: I1124 21:31:49.326866 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.326888 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.326957 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.327014 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.327040 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.327071 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd\") pod 
\"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.327903 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.328610 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.353387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.355020 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.356468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.360565 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5cs\" (UniqueName: 
\"kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.383758 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.446151 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.543643 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.904794 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerStarted","Data":"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7"} Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.907790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerStarted","Data":"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed"} Nov 24 21:31:49 crc kubenswrapper[4801]: I1124 21:31:49.907851 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerStarted","Data":"032ff5b579d1d323b51c91fbf9df7fd0486dec7a1fc1fc727dd62ab6ca1ca8ab"} Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.192140 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.194521 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.296600 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.296891 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="dnsmasq-dns" containerID="cri-o://7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a" gracePeriod=10 Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.699035 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df730a9c-7d92-4c62-aabb-85f96b25e85e" path="/var/lib/kubelet/pods/df730a9c-7d92-4c62-aabb-85f96b25e85e/volumes" Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.884249 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.982405 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerStarted","Data":"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef"} Nov 24 21:31:50 crc kubenswrapper[4801]: I1124 21:31:50.986777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerStarted","Data":"d24ebc3ab692d026d868191dc9de2076aa8b845dc71c7b1c76aae09627029bd4"} Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.004745 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerStarted","Data":"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e"} Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015195 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015474 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015525 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015636 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.015756 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t2tq\" (UniqueName: \"kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq\") pod \"1915439f-0e90-491e-8949-f895b1483935\" 
(UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.021388 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.021343836 podStartE2EDuration="3.021343836s" podCreationTimestamp="2025-11-24 21:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:51.003798412 +0000 UTC m=+1483.086385072" watchObservedRunningTime="2025-11-24 21:31:51.021343836 +0000 UTC m=+1483.103930506" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.023077 4801 generic.go:334] "Generic (PLEG): container finished" podID="1915439f-0e90-491e-8949-f895b1483935" containerID="7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a" exitCode=0 Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.023725 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzs26" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="registry-server" containerID="cri-o://cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a" gracePeriod=2 Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.024341 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.026569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" event={"ID":"1915439f-0e90-491e-8949-f895b1483935","Type":"ContainerDied","Data":"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a"} Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.026638 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-9c2vs" event={"ID":"1915439f-0e90-491e-8949-f895b1483935","Type":"ContainerDied","Data":"c77049ef29118c8840d8abbf79b3b9d09f36520f20e791be3dbef7c6f65394e2"} Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.026660 4801 scope.go:117] "RemoveContainer" containerID="7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.026779 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq" (OuterVolumeSpecName: "kube-api-access-7t2tq") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "kube-api-access-7t2tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.099669 4801 scope.go:117] "RemoveContainer" containerID="d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.119579 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t2tq\" (UniqueName: \"kubernetes.io/projected/1915439f-0e90-491e-8949-f895b1483935-kube-api-access-7t2tq\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.191385 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.212619 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.220837 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.221493 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") pod \"1915439f-0e90-491e-8949-f895b1483935\" (UID: \"1915439f-0e90-491e-8949-f895b1483935\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.222132 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.222153 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: W1124 21:31:51.222250 4801 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1915439f-0e90-491e-8949-f895b1483935/volumes/kubernetes.io~configmap/dns-swift-storage-0 Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.222269 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.228717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config" (OuterVolumeSpecName: "config") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.255969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1915439f-0e90-491e-8949-f895b1483935" (UID: "1915439f-0e90-491e-8949-f895b1483935"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.328441 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.328479 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.328492 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1915439f-0e90-491e-8949-f895b1483935-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.539490 4801 scope.go:117] "RemoveContainer" containerID="7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a" Nov 24 21:31:51 crc kubenswrapper[4801]: E1124 21:31:51.540122 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a\": container with ID starting with 7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a not found: ID does not exist" containerID="7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a" Nov 24 21:31:51 crc 
kubenswrapper[4801]: I1124 21:31:51.540155 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a"} err="failed to get container status \"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a\": rpc error: code = NotFound desc = could not find container \"7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a\": container with ID starting with 7b808f2aa5b711440c20a46f4cfd710ffc0708235de7883da6f3a156e7cc955a not found: ID does not exist" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.540179 4801 scope.go:117] "RemoveContainer" containerID="d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761" Nov 24 21:31:51 crc kubenswrapper[4801]: E1124 21:31:51.541055 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761\": container with ID starting with d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761 not found: ID does not exist" containerID="d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.541080 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761"} err="failed to get container status \"d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761\": rpc error: code = NotFound desc = could not find container \"d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761\": container with ID starting with d71ab9472efff88d260228e15b464de380d3f7fb02a03be0edef32cd197a8761 not found: ID does not exist" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.552294 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:31:51 crc 
kubenswrapper[4801]: I1124 21:31:51.568564 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-9c2vs"] Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.621124 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.744297 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hztpv\" (UniqueName: \"kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv\") pod \"4733a18b-b285-49e9-9d1c-a0aad105a642\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.744480 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content\") pod \"4733a18b-b285-49e9-9d1c-a0aad105a642\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.744700 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities\") pod \"4733a18b-b285-49e9-9d1c-a0aad105a642\" (UID: \"4733a18b-b285-49e9-9d1c-a0aad105a642\") " Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.755288 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities" (OuterVolumeSpecName: "utilities") pod "4733a18b-b285-49e9-9d1c-a0aad105a642" (UID: "4733a18b-b285-49e9-9d1c-a0aad105a642"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.774654 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.776706 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv" (OuterVolumeSpecName: "kube-api-access-hztpv") pod "4733a18b-b285-49e9-9d1c-a0aad105a642" (UID: "4733a18b-b285-49e9-9d1c-a0aad105a642"). InnerVolumeSpecName "kube-api-access-hztpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.835731 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4733a18b-b285-49e9-9d1c-a0aad105a642" (UID: "4733a18b-b285-49e9-9d1c-a0aad105a642"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.878170 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hztpv\" (UniqueName: \"kubernetes.io/projected/4733a18b-b285-49e9-9d1c-a0aad105a642-kube-api-access-hztpv\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:51 crc kubenswrapper[4801]: I1124 21:31:51.878212 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4733a18b-b285-49e9-9d1c-a0aad105a642-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.046498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerStarted","Data":"aefe58858a235a79c536db83fd99cc54f99ecda07a635d66782b89f3fde08282"} Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.046868 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerStarted","Data":"9f14d2aaba6b834de66e844fcb38f653d246287e1a41376eddaacadd1f5474b3"} Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.049696 4801 generic.go:334] "Generic (PLEG): container finished" podID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerID="cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a" exitCode=0 Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.049751 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerDied","Data":"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a"} Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.049789 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzs26" 
event={"ID":"4733a18b-b285-49e9-9d1c-a0aad105a642","Type":"ContainerDied","Data":"56a98a7ff6ccbf84dd8b5e62d196f7cb7a9825e7a2577239e2a20ff06a3826b3"} Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.049811 4801 scope.go:117] "RemoveContainer" containerID="cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.050027 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzs26" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.077440 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerStarted","Data":"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1"} Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.156606 4801 scope.go:117] "RemoveContainer" containerID="302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.171212 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.203877 4801 scope.go:117] "RemoveContainer" containerID="156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.206029 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzs26"] Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.262670 4801 scope.go:117] "RemoveContainer" containerID="cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a" Nov 24 21:31:52 crc kubenswrapper[4801]: E1124 21:31:52.264098 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a\": container with ID starting with 
cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a not found: ID does not exist" containerID="cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.264149 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a"} err="failed to get container status \"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a\": rpc error: code = NotFound desc = could not find container \"cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a\": container with ID starting with cffc205bdae86c0de63156682fc753eedbc8c15866e96fe9551144ed2cb61f1a not found: ID does not exist" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.264196 4801 scope.go:117] "RemoveContainer" containerID="302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d" Nov 24 21:31:52 crc kubenswrapper[4801]: E1124 21:31:52.264846 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d\": container with ID starting with 302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d not found: ID does not exist" containerID="302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.264883 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d"} err="failed to get container status \"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d\": rpc error: code = NotFound desc = could not find container \"302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d\": container with ID starting with 302cf6288c48b1b14afe6a93c2356da8d30ad304581036e00d6019b14609bd5d not found: ID does not 
exist" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.264930 4801 scope.go:117] "RemoveContainer" containerID="156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9" Nov 24 21:31:52 crc kubenswrapper[4801]: E1124 21:31:52.266678 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9\": container with ID starting with 156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9 not found: ID does not exist" containerID="156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.266714 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9"} err="failed to get container status \"156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9\": rpc error: code = NotFound desc = could not find container \"156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9\": container with ID starting with 156984e90eeff8b424b7a2674d5c964cbd77b81da27c9c2b8600b685d7f07ef9 not found: ID does not exist" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.678833 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1915439f-0e90-491e-8949-f895b1483935" path="/var/lib/kubelet/pods/1915439f-0e90-491e-8949-f895b1483935/volumes" Nov 24 21:31:52 crc kubenswrapper[4801]: I1124 21:31:52.680348 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" path="/var/lib/kubelet/pods/4733a18b-b285-49e9-9d1c-a0aad105a642/volumes" Nov 24 21:31:53 crc kubenswrapper[4801]: I1124 21:31:53.114915 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerStarted","Data":"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4"} Nov 24 21:31:53 crc kubenswrapper[4801]: I1124 21:31:53.130430 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerStarted","Data":"1998ebf6d75f12234c1cd99e4e0ad5e985d38a794c7c6429c349c4af7a7d0806"} Nov 24 21:31:53 crc kubenswrapper[4801]: I1124 21:31:53.154387 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.28159472 podStartE2EDuration="6.154330823s" podCreationTimestamp="2025-11-24 21:31:47 +0000 UTC" firstStartedPulling="2025-11-24 21:31:48.418233754 +0000 UTC m=+1480.500820424" lastFinishedPulling="2025-11-24 21:31:52.290969857 +0000 UTC m=+1484.373556527" observedRunningTime="2025-11-24 21:31:53.146038856 +0000 UTC m=+1485.228625526" watchObservedRunningTime="2025-11-24 21:31:53.154330823 +0000 UTC m=+1485.236917493" Nov 24 21:31:54 crc kubenswrapper[4801]: I1124 21:31:54.319719 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:31:54 crc kubenswrapper[4801]: I1124 21:31:54.320294 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:31:55 crc kubenswrapper[4801]: I1124 21:31:55.155930 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerStarted","Data":"242236f6246faa7ff2807aadcd27170a4f9b2380436e4933907ab849ea46e8b3"} Nov 24 21:31:55 crc kubenswrapper[4801]: I1124 21:31:55.156658 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:31:55 crc kubenswrapper[4801]: I1124 21:31:55.157323 4801 generic.go:334] "Generic (PLEG): container finished" podID="532d5719-89fc-47bc-bab7-9afb76342bf3" containerID="38f50e7ff5ce3f16b80c107066d3ccc44fd234f3a1200415ea8158d398b251c1" exitCode=0 Nov 24 21:31:55 crc kubenswrapper[4801]: I1124 21:31:55.157399 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z2j4p" event={"ID":"532d5719-89fc-47bc-bab7-9afb76342bf3","Type":"ContainerDied","Data":"38f50e7ff5ce3f16b80c107066d3ccc44fd234f3a1200415ea8158d398b251c1"} Nov 24 21:31:55 crc kubenswrapper[4801]: I1124 21:31:55.204491 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.276039002 podStartE2EDuration="7.204455081s" podCreationTimestamp="2025-11-24 21:31:48 +0000 UTC" firstStartedPulling="2025-11-24 21:31:50.239791316 +0000 UTC m=+1482.322377976" lastFinishedPulling="2025-11-24 21:31:54.168207385 +0000 UTC m=+1486.250794055" observedRunningTime="2025-11-24 21:31:55.19796071 +0000 UTC m=+1487.280547380" watchObservedRunningTime="2025-11-24 21:31:55.204455081 +0000 UTC m=+1487.287041781" Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.734851 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.931876 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data\") pod \"532d5719-89fc-47bc-bab7-9afb76342bf3\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.932429 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle\") pod \"532d5719-89fc-47bc-bab7-9afb76342bf3\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.932801 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq\") pod \"532d5719-89fc-47bc-bab7-9afb76342bf3\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.933124 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts\") pod \"532d5719-89fc-47bc-bab7-9afb76342bf3\" (UID: \"532d5719-89fc-47bc-bab7-9afb76342bf3\") " Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.944717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts" (OuterVolumeSpecName: "scripts") pod "532d5719-89fc-47bc-bab7-9afb76342bf3" (UID: "532d5719-89fc-47bc-bab7-9afb76342bf3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.945739 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq" (OuterVolumeSpecName: "kube-api-access-sxvxq") pod "532d5719-89fc-47bc-bab7-9afb76342bf3" (UID: "532d5719-89fc-47bc-bab7-9afb76342bf3"). InnerVolumeSpecName "kube-api-access-sxvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.988910 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data" (OuterVolumeSpecName: "config-data") pod "532d5719-89fc-47bc-bab7-9afb76342bf3" (UID: "532d5719-89fc-47bc-bab7-9afb76342bf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:56 crc kubenswrapper[4801]: I1124 21:31:56.989410 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "532d5719-89fc-47bc-bab7-9afb76342bf3" (UID: "532d5719-89fc-47bc-bab7-9afb76342bf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.035796 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.035845 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/532d5719-89fc-47bc-bab7-9afb76342bf3-kube-api-access-sxvxq\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.035860 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.035870 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532d5719-89fc-47bc-bab7-9afb76342bf3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.183482 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z2j4p" event={"ID":"532d5719-89fc-47bc-bab7-9afb76342bf3","Type":"ContainerDied","Data":"b67bc7a32f4ea9be793705d55d99383fb08bd8452facd9af45cc0b6341beec12"} Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.183537 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67bc7a32f4ea9be793705d55d99383fb08bd8452facd9af45cc0b6341beec12" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.183570 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z2j4p" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.349156 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.349496 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-log" containerID="cri-o://0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" gracePeriod=30 Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.349697 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-api" containerID="cri-o://31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" gracePeriod=30 Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.366608 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.366867 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerName="nova-scheduler-scheduler" containerID="cri-o://cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" gracePeriod=30 Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.418280 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.418913 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" containerID="cri-o://74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04" gracePeriod=30 Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.419593 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" containerID="cri-o://0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772" gracePeriod=30 Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.497594 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498256 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="registry-server" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498276 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="registry-server" Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498324 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="extract-utilities" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498333 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="extract-utilities" Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498344 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532d5719-89fc-47bc-bab7-9afb76342bf3" containerName="nova-manage" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498352 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="532d5719-89fc-47bc-bab7-9afb76342bf3" containerName="nova-manage" Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498397 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="dnsmasq-dns" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498404 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="dnsmasq-dns" Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498417 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="init" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498425 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="init" Nov 24 21:31:57 crc kubenswrapper[4801]: E1124 21:31:57.498437 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="extract-content" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498444 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="extract-content" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498709 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1915439f-0e90-491e-8949-f895b1483935" containerName="dnsmasq-dns" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498729 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="532d5719-89fc-47bc-bab7-9afb76342bf3" containerName="nova-manage" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.498746 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4733a18b-b285-49e9-9d1c-a0aad105a642" containerName="registry-server" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.500701 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.546524 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.561563 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2pw\" (UniqueName: \"kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.562069 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.562337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.665635 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.665756 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dk2pw\" (UniqueName: \"kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.665893 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.666432 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.668767 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.700354 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2pw\" (UniqueName: \"kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw\") pod \"community-operators-n9d8v\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:57 crc kubenswrapper[4801]: I1124 21:31:57.839358 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.249742 4801 generic.go:334] "Generic (PLEG): container finished" podID="004c2726-7806-4617-91be-669d61e0a8c4" containerID="74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04" exitCode=143 Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.250359 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerDied","Data":"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04"} Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.250524 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293110 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerID="31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" exitCode=0 Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293150 4801 generic.go:334] "Generic (PLEG): container finished" podID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerID="0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" exitCode=143 Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293181 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerDied","Data":"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef"} Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293219 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerDied","Data":"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed"} Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293234 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbf332cd-a5a6-47e0-aae7-f4a93e446680","Type":"ContainerDied","Data":"032ff5b579d1d323b51c91fbf9df7fd0486dec7a1fc1fc727dd62ab6ca1ca8ab"} Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.293252 4801 scope.go:117] "RemoveContainer" containerID="31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.370057 4801 scope.go:117] "RemoveContainer" containerID="0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.396876 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.397132 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.397304 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.397476 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc 
kubenswrapper[4801]: I1124 21:31:58.397523 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.397589 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs\") pod \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\" (UID: \"fbf332cd-a5a6-47e0-aae7-f4a93e446680\") " Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.399426 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs" (OuterVolumeSpecName: "logs") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.414887 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8" (OuterVolumeSpecName: "kube-api-access-tgzb8") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "kube-api-access-tgzb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.429877 4801 scope.go:117] "RemoveContainer" containerID="31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" Nov 24 21:31:58 crc kubenswrapper[4801]: E1124 21:31:58.433494 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef\": container with ID starting with 31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef not found: ID does not exist" containerID="31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.433649 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef"} err="failed to get container status \"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef\": rpc error: code = NotFound desc = could not find container \"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef\": container with ID starting with 31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef not found: ID does not exist" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.433735 4801 scope.go:117] "RemoveContainer" containerID="0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" Nov 24 21:31:58 crc kubenswrapper[4801]: E1124 21:31:58.444820 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed\": container with ID starting with 0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed not found: ID does not exist" containerID="0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.444903 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed"} err="failed to get container status \"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed\": rpc error: code = NotFound desc = could not find container \"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed\": container with ID starting with 0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed not found: ID does not exist" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.444946 4801 scope.go:117] "RemoveContainer" containerID="31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.447874 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef"} err="failed to get container status \"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef\": rpc error: code = NotFound desc = could not find container \"31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef\": container with ID starting with 31c46ad490deffdfec825c21d664f953cb23eec2f63326886a2e45daca2991ef not found: ID does not exist" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.447918 4801 scope.go:117] "RemoveContainer" containerID="0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.453808 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed"} err="failed to get container status \"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed\": rpc error: code = NotFound desc = could not find container \"0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed\": container with ID starting with 
0d9e12f5b4ed7c0274c47d8e56e9d5c4f85639f00f7284f84a726f27e68d95ed not found: ID does not exist" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.466201 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data" (OuterVolumeSpecName: "config-data") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.479807 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.491665 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.500453 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/fbf332cd-a5a6-47e0-aae7-f4a93e446680-kube-api-access-tgzb8\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.500487 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.500498 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf332cd-a5a6-47e0-aae7-f4a93e446680-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.500510 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.500520 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.504587 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fbf332cd-a5a6-47e0-aae7-f4a93e446680" (UID: "fbf332cd-a5a6-47e0-aae7-f4a93e446680"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.603177 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbf332cd-a5a6-47e0-aae7-f4a93e446680-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:58 crc kubenswrapper[4801]: I1124 21:31:58.733167 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.308514 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.312503 4801 generic.go:334] "Generic (PLEG): container finished" podID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerID="bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7" exitCode=0 Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.312651 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerDied","Data":"bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7"} Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.312753 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerStarted","Data":"0a163e09e571c54d528955e088783721969eb9c35faca8519336ab70cf1c4a77"} Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.348711 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.372297 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.393000 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Nov 24 21:31:59 crc kubenswrapper[4801]: E1124 21:31:59.394115 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-api" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.394236 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-api" Nov 24 21:31:59 crc kubenswrapper[4801]: E1124 21:31:59.394332 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-log" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.394522 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-log" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.394916 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-log" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.395005 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" containerName="nova-api-api" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.396763 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.400693 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.401499 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.402947 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434068 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05da65ec-b05b-46ff-886f-e800aae4b6b3-logs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434150 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434177 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434294 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-config-data\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " 
pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434355 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.434529 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2hg\" (UniqueName: \"kubernetes.io/projected/05da65ec-b05b-46ff-886f-e800aae4b6b3-kube-api-access-pl2hg\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.439087 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.541642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05da65ec-b05b-46ff-886f-e800aae4b6b3-logs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.541990 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.542097 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 
crc kubenswrapper[4801]: I1124 21:31:59.542215 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05da65ec-b05b-46ff-886f-e800aae4b6b3-logs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.542405 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-config-data\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.542537 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.542731 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2hg\" (UniqueName: \"kubernetes.io/projected/05da65ec-b05b-46ff-886f-e800aae4b6b3-kube-api-access-pl2hg\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.550161 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.550613 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.551053 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.562511 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2hg\" (UniqueName: \"kubernetes.io/projected/05da65ec-b05b-46ff-886f-e800aae4b6b3-kube-api-access-pl2hg\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.571563 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05da65ec-b05b-46ff-886f-e800aae4b6b3-config-data\") pod \"nova-api-0\" (UID: \"05da65ec-b05b-46ff-886f-e800aae4b6b3\") " pod="openstack/nova-api-0" Nov 24 21:31:59 crc kubenswrapper[4801]: I1124 21:31:59.727891 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.253743 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.327527 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerStarted","Data":"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74"} Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.329967 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05da65ec-b05b-46ff-886f-e800aae4b6b3","Type":"ContainerStarted","Data":"d0cb7a20c758236942ecb2e82f4a573e9485948464cd0980a52c91c6c8275565"} Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.591340 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:39842->10.217.0.251:8775: read: connection reset by peer" Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.591352 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:39852->10.217.0.251:8775: read: connection reset by peer" Nov 24 21:32:00 crc kubenswrapper[4801]: I1124 21:32:00.684667 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf332cd-a5a6-47e0-aae7-f4a93e446680" path="/var/lib/kubelet/pods/fbf332cd-a5a6-47e0-aae7-f4a93e446680/volumes" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.254305 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.308788 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data\") pod \"004c2726-7806-4617-91be-669d61e0a8c4\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.308887 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmxq\" (UniqueName: \"kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq\") pod \"004c2726-7806-4617-91be-669d61e0a8c4\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.309015 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs\") pod \"004c2726-7806-4617-91be-669d61e0a8c4\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.309209 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs\") pod \"004c2726-7806-4617-91be-669d61e0a8c4\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.309263 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle\") pod \"004c2726-7806-4617-91be-669d61e0a8c4\" (UID: \"004c2726-7806-4617-91be-669d61e0a8c4\") " Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.311577 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs" (OuterVolumeSpecName: "logs") pod "004c2726-7806-4617-91be-669d61e0a8c4" (UID: "004c2726-7806-4617-91be-669d61e0a8c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.319922 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq" (OuterVolumeSpecName: "kube-api-access-blmxq") pod "004c2726-7806-4617-91be-669d61e0a8c4" (UID: "004c2726-7806-4617-91be-669d61e0a8c4"). InnerVolumeSpecName "kube-api-access-blmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.395049 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05da65ec-b05b-46ff-886f-e800aae4b6b3","Type":"ContainerStarted","Data":"323c0ad07c6b4585a39f58fef69ac7cd49a23dbe34b72f7bb01c6cbb81765aef"} Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.395465 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05da65ec-b05b-46ff-886f-e800aae4b6b3","Type":"ContainerStarted","Data":"42236657de33e5f05f1da40f905a23bc51ffd76ce576c35829967637842e6b4c"} Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.395673 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data" (OuterVolumeSpecName: "config-data") pod "004c2726-7806-4617-91be-669d61e0a8c4" (UID: "004c2726-7806-4617-91be-669d61e0a8c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.404654 4801 generic.go:334] "Generic (PLEG): container finished" podID="004c2726-7806-4617-91be-669d61e0a8c4" containerID="0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772" exitCode=0 Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.406946 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerDied","Data":"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772"} Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.406997 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"004c2726-7806-4617-91be-669d61e0a8c4","Type":"ContainerDied","Data":"198d92ffbbc6260a322f36a84f16fe6252735cfeb422ae8f861a10f0d9e40a26"} Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.407022 4801 scope.go:117] "RemoveContainer" containerID="0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.407026 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.414300 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.414345 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmxq\" (UniqueName: \"kubernetes.io/projected/004c2726-7806-4617-91be-669d61e0a8c4-kube-api-access-blmxq\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.414389 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/004c2726-7806-4617-91be-669d61e0a8c4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.439857 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "004c2726-7806-4617-91be-669d61e0a8c4" (UID: "004c2726-7806-4617-91be-669d61e0a8c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.454347 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.45431457 podStartE2EDuration="2.45431457s" podCreationTimestamp="2025-11-24 21:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:32:01.430147741 +0000 UTC m=+1493.512734431" watchObservedRunningTime="2025-11-24 21:32:01.45431457 +0000 UTC m=+1493.536901240" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.474348 4801 scope.go:117] "RemoveContainer" containerID="74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.500929 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "004c2726-7806-4617-91be-669d61e0a8c4" (UID: "004c2726-7806-4617-91be-669d61e0a8c4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.514689 4801 scope.go:117] "RemoveContainer" containerID="0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772" Nov 24 21:32:01 crc kubenswrapper[4801]: E1124 21:32:01.516823 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772\": container with ID starting with 0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772 not found: ID does not exist" containerID="0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.516928 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772"} err="failed to get container status \"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772\": rpc error: code = NotFound desc = could not find container \"0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772\": container with ID starting with 0b31f974212c64383ca4a189af77fd9810e9316b4bcccfbde255e88033776772 not found: ID does not exist" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.517018 4801 scope.go:117] "RemoveContainer" containerID="74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04" Nov 24 21:32:01 crc kubenswrapper[4801]: E1124 21:32:01.517420 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04\": container with ID starting with 74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04 not found: ID does not exist" containerID="74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.517495 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04"} err="failed to get container status \"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04\": rpc error: code = NotFound desc = could not find container \"74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04\": container with ID starting with 74b6f1c081bbea7a3cb98b34423b26347cebf3fbdf59eb765f5472273e178a04 not found: ID does not exist" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.518192 4801 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.518218 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/004c2726-7806-4617-91be-669d61e0a8c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.757023 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.779345 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.829017 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:32:01 crc kubenswrapper[4801]: E1124 21:32:01.829749 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.829768 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" Nov 24 21:32:01 crc kubenswrapper[4801]: E1124 21:32:01.829778 
4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.829785 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.830043 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-metadata" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.830077 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="004c2726-7806-4617-91be-669d61e0a8c4" containerName="nova-metadata-log" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.838120 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.844257 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.848753 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.861276 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.939097 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-config-data\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.939144 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.939273 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdz6\" (UniqueName: \"kubernetes.io/projected/8fabb604-4b49-4341-8d2b-9d090f472937-kube-api-access-8zdz6\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.939348 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fabb604-4b49-4341-8d2b-9d090f472937-logs\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:01 crc kubenswrapper[4801]: I1124 21:32:01.939545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042335 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdz6\" (UniqueName: \"kubernetes.io/projected/8fabb604-4b49-4341-8d2b-9d090f472937-kube-api-access-8zdz6\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fabb604-4b49-4341-8d2b-9d090f472937-logs\") pod \"nova-metadata-0\" (UID: 
\"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042615 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042644 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-config-data\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042666 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.042948 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fabb604-4b49-4341-8d2b-9d090f472937-logs\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.052464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.054575 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-config-data\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.073146 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fabb604-4b49-4341-8d2b-9d090f472937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.085111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdz6\" (UniqueName: \"kubernetes.io/projected/8fabb604-4b49-4341-8d2b-9d090f472937-kube-api-access-8zdz6\") pod \"nova-metadata-0\" (UID: \"8fabb604-4b49-4341-8d2b-9d090f472937\") " pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.243012 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:32:02 crc kubenswrapper[4801]: E1124 21:32:02.359633 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:32:02 crc kubenswrapper[4801]: E1124 21:32:02.364204 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:32:02 crc kubenswrapper[4801]: E1124 21:32:02.366005 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:32:02 crc kubenswrapper[4801]: E1124 21:32:02.366052 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerName="nova-scheduler-scheduler" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.441946 4801 generic.go:334] "Generic (PLEG): container finished" podID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerID="13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74" exitCode=0 Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.449441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerDied","Data":"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74"} Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.686905 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004c2726-7806-4617-91be-669d61e0a8c4" path="/var/lib/kubelet/pods/004c2726-7806-4617-91be-669d61e0a8c4/volumes" Nov 24 21:32:02 crc kubenswrapper[4801]: I1124 21:32:02.820328 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.292321 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.405104 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle\") pod \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.407180 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tddfj\" (UniqueName: \"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj\") pod \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.407504 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data\") pod \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\" (UID: \"02e0ae28-c649-4f69-95e0-5c8d61ee602c\") " Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.423018 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj" (OuterVolumeSpecName: "kube-api-access-tddfj") pod "02e0ae28-c649-4f69-95e0-5c8d61ee602c" (UID: "02e0ae28-c649-4f69-95e0-5c8d61ee602c"). InnerVolumeSpecName "kube-api-access-tddfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.453084 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02e0ae28-c649-4f69-95e0-5c8d61ee602c" (UID: "02e0ae28-c649-4f69-95e0-5c8d61ee602c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.505848 4801 generic.go:334] "Generic (PLEG): container finished" podID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" exitCode=0 Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.505962 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02e0ae28-c649-4f69-95e0-5c8d61ee602c","Type":"ContainerDied","Data":"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16"} Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.506004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02e0ae28-c649-4f69-95e0-5c8d61ee602c","Type":"ContainerDied","Data":"6a710e16d4f483c3343bf6594734d07d0258cc43a3f74fb500dfc0f0ca08a247"} Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.506024 4801 scope.go:117] "RemoveContainer" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.508330 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.511664 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8fabb604-4b49-4341-8d2b-9d090f472937","Type":"ContainerStarted","Data":"7665fd73aabf73ed1aec6f013caee2a52046aa1aef767bf9224f3d1d73afd3d3"} Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.511715 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8fabb604-4b49-4341-8d2b-9d090f472937","Type":"ContainerStarted","Data":"3d0603a84944b17df9512ec903dd0c36dff15629ba556778a87683ab82d5ab4c"} Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.519061 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.519091 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tddfj\" (UniqueName: \"kubernetes.io/projected/02e0ae28-c649-4f69-95e0-5c8d61ee602c-kube-api-access-tddfj\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.524394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerStarted","Data":"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa"} Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.528475 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data" (OuterVolumeSpecName: "config-data") pod "02e0ae28-c649-4f69-95e0-5c8d61ee602c" (UID: "02e0ae28-c649-4f69-95e0-5c8d61ee602c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.570314 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n9d8v" podStartSLOduration=2.893284454 podStartE2EDuration="6.570285549s" podCreationTimestamp="2025-11-24 21:31:57 +0000 UTC" firstStartedPulling="2025-11-24 21:31:59.316787752 +0000 UTC m=+1491.399374432" lastFinishedPulling="2025-11-24 21:32:02.993788857 +0000 UTC m=+1495.076375527" observedRunningTime="2025-11-24 21:32:03.549418423 +0000 UTC m=+1495.632005093" watchObservedRunningTime="2025-11-24 21:32:03.570285549 +0000 UTC m=+1495.652872219" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.575449 4801 scope.go:117] "RemoveContainer" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" Nov 24 21:32:03 crc kubenswrapper[4801]: E1124 21:32:03.576082 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16\": container with ID starting with cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16 not found: ID does not exist" containerID="cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.576143 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16"} err="failed to get container status \"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16\": rpc error: code = NotFound desc = could not find container \"cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16\": container with ID starting with cb8a414ae4b35004e96ebc4fa0f88d2465913b3f08c4ad8229e25179ce8fbb16 not found: ID does not exist" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.621551 4801 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e0ae28-c649-4f69-95e0-5c8d61ee602c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.953621 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.965780 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.979510 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:32:03 crc kubenswrapper[4801]: E1124 21:32:03.980150 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerName="nova-scheduler-scheduler" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.980170 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerName="nova-scheduler-scheduler" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.980434 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" containerName="nova-scheduler-scheduler" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.981483 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.983590 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:32:03 crc kubenswrapper[4801]: I1124 21:32:03.992006 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.136898 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.137448 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-config-data\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.137853 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrjn\" (UniqueName: \"kubernetes.io/projected/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-kube-api-access-vvrjn\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.240878 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrjn\" (UniqueName: \"kubernetes.io/projected/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-kube-api-access-vvrjn\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.241009 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.241158 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-config-data\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.248835 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.249416 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-config-data\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.266692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrjn\" (UniqueName: \"kubernetes.io/projected/107c2bb1-2ca8-4f37-a2e0-51d928f7a91e-kube-api-access-vvrjn\") pod \"nova-scheduler-0\" (UID: \"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e\") " pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.313222 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.557146 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8fabb604-4b49-4341-8d2b-9d090f472937","Type":"ContainerStarted","Data":"a535e4fdc3fe8fc11d8a0c3b49deae4acdfb2e6f9082d3e16abcdc5315c188cc"} Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.608068 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.608038812 podStartE2EDuration="3.608038812s" podCreationTimestamp="2025-11-24 21:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:32:04.580296822 +0000 UTC m=+1496.662883502" watchObservedRunningTime="2025-11-24 21:32:04.608038812 +0000 UTC m=+1496.690625482" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.700079 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e0ae28-c649-4f69-95e0-5c8d61ee602c" path="/var/lib/kubelet/pods/02e0ae28-c649-4f69-95e0-5c8d61ee602c/volumes" Nov 24 21:32:04 crc kubenswrapper[4801]: I1124 21:32:04.880737 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:32:05 crc kubenswrapper[4801]: I1124 21:32:05.573381 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e","Type":"ContainerStarted","Data":"80d0601487312a2ccdda83b4c70fac82bca5591f6ba1b9b1abd77d502bb52a5f"} Nov 24 21:32:05 crc kubenswrapper[4801]: I1124 21:32:05.573776 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107c2bb1-2ca8-4f37-a2e0-51d928f7a91e","Type":"ContainerStarted","Data":"56e32d859b87bbcb33f6c41f46c1441d187ec54f8f5cb5bb3e11c00faf349c4b"} Nov 24 21:32:05 crc kubenswrapper[4801]: I1124 21:32:05.601317 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.601295405 podStartE2EDuration="2.601295405s" podCreationTimestamp="2025-11-24 21:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:32:05.593150733 +0000 UTC m=+1497.675737413" watchObservedRunningTime="2025-11-24 21:32:05.601295405 +0000 UTC m=+1497.683882075" Nov 24 21:32:07 crc kubenswrapper[4801]: I1124 21:32:07.243183 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:32:07 crc kubenswrapper[4801]: I1124 21:32:07.243742 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:32:07 crc kubenswrapper[4801]: I1124 21:32:07.840520 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:07 crc kubenswrapper[4801]: I1124 21:32:07.840599 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:07 crc kubenswrapper[4801]: I1124 21:32:07.929680 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:08 crc kubenswrapper[4801]: I1124 21:32:08.691287 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:08 crc kubenswrapper[4801]: I1124 21:32:08.760217 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:32:09 crc kubenswrapper[4801]: I1124 21:32:09.313488 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:32:09 crc kubenswrapper[4801]: I1124 21:32:09.728103 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:32:09 crc kubenswrapper[4801]: I1124 21:32:09.728675 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:32:10 crc kubenswrapper[4801]: I1124 21:32:10.652278 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n9d8v" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="registry-server" containerID="cri-o://5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa" gracePeriod=2 Nov 24 21:32:10 crc kubenswrapper[4801]: I1124 21:32:10.752576 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="05da65ec-b05b-46ff-886f-e800aae4b6b3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:32:10 crc kubenswrapper[4801]: I1124 21:32:10.752591 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="05da65ec-b05b-46ff-886f-e800aae4b6b3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.313544 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.496989 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities\") pod \"57a85a02-257b-4a96-9c57-1bee074d2c30\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.497222 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk2pw\" (UniqueName: \"kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw\") pod \"57a85a02-257b-4a96-9c57-1bee074d2c30\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.497689 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content\") pod \"57a85a02-257b-4a96-9c57-1bee074d2c30\" (UID: \"57a85a02-257b-4a96-9c57-1bee074d2c30\") " Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.497887 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities" (OuterVolumeSpecName: "utilities") pod "57a85a02-257b-4a96-9c57-1bee074d2c30" (UID: "57a85a02-257b-4a96-9c57-1bee074d2c30"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.498517 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.513239 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw" (OuterVolumeSpecName: "kube-api-access-dk2pw") pod "57a85a02-257b-4a96-9c57-1bee074d2c30" (UID: "57a85a02-257b-4a96-9c57-1bee074d2c30"). InnerVolumeSpecName "kube-api-access-dk2pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.553667 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a85a02-257b-4a96-9c57-1bee074d2c30" (UID: "57a85a02-257b-4a96-9c57-1bee074d2c30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.601016 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a85a02-257b-4a96-9c57-1bee074d2c30-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.601054 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk2pw\" (UniqueName: \"kubernetes.io/projected/57a85a02-257b-4a96-9c57-1bee074d2c30-kube-api-access-dk2pw\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.677295 4801 generic.go:334] "Generic (PLEG): container finished" podID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerID="5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa" exitCode=0 Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.677443 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9d8v" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.677421 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerDied","Data":"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa"} Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.677618 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9d8v" event={"ID":"57a85a02-257b-4a96-9c57-1bee074d2c30","Type":"ContainerDied","Data":"0a163e09e571c54d528955e088783721969eb9c35faca8519336ab70cf1c4a77"} Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.677646 4801 scope.go:117] "RemoveContainer" containerID="5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.741663 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.745212 4801 scope.go:117] "RemoveContainer" containerID="13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.754954 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n9d8v"] Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.789674 4801 scope.go:117] "RemoveContainer" containerID="bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.841689 4801 scope.go:117] "RemoveContainer" containerID="5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa" Nov 24 21:32:11 crc kubenswrapper[4801]: E1124 21:32:11.842470 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa\": container with ID starting with 5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa not found: ID does not exist" containerID="5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.842516 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa"} err="failed to get container status \"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa\": rpc error: code = NotFound desc = could not find container \"5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa\": container with ID starting with 5c5ffbe3ebea4c2c3f2d8ecd171846b53a75b881c0006f441fc808b6cc297ffa not found: ID does not exist" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.842551 4801 scope.go:117] "RemoveContainer" 
containerID="13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74" Nov 24 21:32:11 crc kubenswrapper[4801]: E1124 21:32:11.843083 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74\": container with ID starting with 13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74 not found: ID does not exist" containerID="13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.843119 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74"} err="failed to get container status \"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74\": rpc error: code = NotFound desc = could not find container \"13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74\": container with ID starting with 13fd89cf0b27bab9a54e5c8837734ad3e9c1e6e18cf7c5e7cb2d3851b0c44b74 not found: ID does not exist" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.843169 4801 scope.go:117] "RemoveContainer" containerID="bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7" Nov 24 21:32:11 crc kubenswrapper[4801]: E1124 21:32:11.844521 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7\": container with ID starting with bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7 not found: ID does not exist" containerID="bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7" Nov 24 21:32:11 crc kubenswrapper[4801]: I1124 21:32:11.844544 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7"} err="failed to get container status \"bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7\": rpc error: code = NotFound desc = could not find container \"bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7\": container with ID starting with bb2de0bb4574a7babe2de9054a77348ba62a029671b12cd66ab576fb6f3323d7 not found: ID does not exist" Nov 24 21:32:12 crc kubenswrapper[4801]: I1124 21:32:12.244121 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:32:12 crc kubenswrapper[4801]: I1124 21:32:12.244200 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:32:12 crc kubenswrapper[4801]: I1124 21:32:12.686091 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" path="/var/lib/kubelet/pods/57a85a02-257b-4a96-9c57-1bee074d2c30/volumes" Nov 24 21:32:13 crc kubenswrapper[4801]: I1124 21:32:13.253587 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8fabb604-4b49-4341-8d2b-9d090f472937" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:32:13 crc kubenswrapper[4801]: I1124 21:32:13.253598 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8fabb604-4b49-4341-8d2b-9d090f472937" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:32:14 crc kubenswrapper[4801]: I1124 21:32:14.313520 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:32:14 crc 
kubenswrapper[4801]: I1124 21:32:14.365062 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:32:14 crc kubenswrapper[4801]: I1124 21:32:14.782412 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.569873 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.735028 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.736570 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.737678 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.746172 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.799981 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:32:19 crc kubenswrapper[4801]: I1124 21:32:19.806031 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:32:22 crc kubenswrapper[4801]: I1124 21:32:22.251520 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:32:22 crc kubenswrapper[4801]: I1124 21:32:22.252497 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:32:22 crc kubenswrapper[4801]: I1124 21:32:22.256492 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:32:22 crc 
kubenswrapper[4801]: I1124 21:32:22.267436 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.319947 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.320536 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.320617 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.322031 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.322101 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" gracePeriod=600 Nov 24 21:32:24 crc kubenswrapper[4801]: E1124 21:32:24.447328 4801 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.885885 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" exitCode=0 Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.886072 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6"} Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.887492 4801 scope.go:117] "RemoveContainer" containerID="71554722c44235bb81cd2780183b2b3394df41c31c6f1cdedb2967dd32989a7b" Nov 24 21:32:24 crc kubenswrapper[4801]: I1124 21:32:24.888656 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:32:24 crc kubenswrapper[4801]: E1124 21:32:24.889033 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.094990 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.095264 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52" containerName="kube-state-metrics" containerID="cri-o://1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367" gracePeriod=30 Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.290804 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.291420 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="44202548-736a-47cc-93f0-622fca103c29" containerName="mysqld-exporter" containerID="cri-o://b80e967374077c3a72433b7e49bbdb652565636392607e2604f0aa2f0b071d47" gracePeriod=30 Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.833706 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.928261 4801 generic.go:334] "Generic (PLEG): container finished" podID="44202548-736a-47cc-93f0-622fca103c29" containerID="b80e967374077c3a72433b7e49bbdb652565636392607e2604f0aa2f0b071d47" exitCode=2 Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.928344 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"44202548-736a-47cc-93f0-622fca103c29","Type":"ContainerDied","Data":"b80e967374077c3a72433b7e49bbdb652565636392607e2604f0aa2f0b071d47"} Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.931496 4801 generic.go:334] "Generic (PLEG): container finished" podID="41c9700a-0355-4e6d-82d0-934fc45f5d52" containerID="1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367" exitCode=2 Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.931561 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41c9700a-0355-4e6d-82d0-934fc45f5d52","Type":"ContainerDied","Data":"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367"} Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.931580 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41c9700a-0355-4e6d-82d0-934fc45f5d52","Type":"ContainerDied","Data":"24812beac58c05e6d8e5f42e59699163b6043b985234eb78a3fae7106a2b1670"} Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.931599 4801 scope.go:117] "RemoveContainer" containerID="1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.931619 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.947176 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9gz7\" (UniqueName: \"kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7\") pod \"41c9700a-0355-4e6d-82d0-934fc45f5d52\" (UID: \"41c9700a-0355-4e6d-82d0-934fc45f5d52\") " Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.962391 4801 scope.go:117] "RemoveContainer" containerID="1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367" Nov 24 21:32:25 crc kubenswrapper[4801]: E1124 21:32:25.963054 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367\": container with ID starting with 1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367 not found: ID does not exist" containerID="1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.963183 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367"} err="failed to get container status \"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367\": rpc error: code = NotFound desc = could not find container \"1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367\": container with ID starting with 1ed2a4595aa02b6c6a17384e2c31ad7b5f28f1e8418f82f473fb9c26a90c5367 not found: ID does not exist" Nov 24 21:32:25 crc kubenswrapper[4801]: I1124 21:32:25.968299 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7" (OuterVolumeSpecName: "kube-api-access-p9gz7") pod "41c9700a-0355-4e6d-82d0-934fc45f5d52" (UID: 
"41c9700a-0355-4e6d-82d0-934fc45f5d52"). InnerVolumeSpecName "kube-api-access-p9gz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.055977 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9gz7\" (UniqueName: \"kubernetes.io/projected/41c9700a-0355-4e6d-82d0-934fc45f5d52-kube-api-access-p9gz7\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.393104 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.407704 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.420038 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.465720 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9q2c\" (UniqueName: \"kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c\") pod \"44202548-736a-47cc-93f0-622fca103c29\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.465959 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data\") pod \"44202548-736a-47cc-93f0-622fca103c29\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.466053 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle\") pod \"44202548-736a-47cc-93f0-622fca103c29\" (UID: \"44202548-736a-47cc-93f0-622fca103c29\") " Nov 24 
21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.472827 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c" (OuterVolumeSpecName: "kube-api-access-q9q2c") pod "44202548-736a-47cc-93f0-622fca103c29" (UID: "44202548-736a-47cc-93f0-622fca103c29"). InnerVolumeSpecName "kube-api-access-q9q2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.484191 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:26 crc kubenswrapper[4801]: E1124 21:32:26.484980 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52" containerName="kube-state-metrics" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485002 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52" containerName="kube-state-metrics" Nov 24 21:32:26 crc kubenswrapper[4801]: E1124 21:32:26.485034 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="registry-server" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485042 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="registry-server" Nov 24 21:32:26 crc kubenswrapper[4801]: E1124 21:32:26.485058 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44202548-736a-47cc-93f0-622fca103c29" containerName="mysqld-exporter" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485066 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="44202548-736a-47cc-93f0-622fca103c29" containerName="mysqld-exporter" Nov 24 21:32:26 crc kubenswrapper[4801]: E1124 21:32:26.485076 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" 
containerName="extract-content" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485082 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="extract-content" Nov 24 21:32:26 crc kubenswrapper[4801]: E1124 21:32:26.485110 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="extract-utilities" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485118 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="extract-utilities" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485394 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="44202548-736a-47cc-93f0-622fca103c29" containerName="mysqld-exporter" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485431 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a85a02-257b-4a96-9c57-1bee074d2c30" containerName="registry-server" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.485450 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52" containerName="kube-state-metrics" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.486588 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.492348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.494085 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.495246 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.535275 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44202548-736a-47cc-93f0-622fca103c29" (UID: "44202548-736a-47cc-93f0-622fca103c29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.552186 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data" (OuterVolumeSpecName: "config-data") pod "44202548-736a-47cc-93f0-622fca103c29" (UID: "44202548-736a-47cc-93f0-622fca103c29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqwm\" (UniqueName: \"kubernetes.io/projected/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-api-access-rkqwm\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570519 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570549 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570756 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 
21:32:26.570777 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44202548-736a-47cc-93f0-622fca103c29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.570819 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9q2c\" (UniqueName: \"kubernetes.io/projected/44202548-736a-47cc-93f0-622fca103c29-kube-api-access-q9q2c\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.674275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqwm\" (UniqueName: \"kubernetes.io/projected/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-api-access-rkqwm\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.674411 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.674445 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.674573 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.678943 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.680042 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.682943 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c9700a-0355-4e6d-82d0-934fc45f5d52" path="/var/lib/kubelet/pods/41c9700a-0355-4e6d-82d0-934fc45f5d52/volumes" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.685396 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.693490 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqwm\" (UniqueName: \"kubernetes.io/projected/25c22e81-bd05-43b1-8cc2-7a460fdeb76f-kube-api-access-rkqwm\") pod \"kube-state-metrics-0\" (UID: \"25c22e81-bd05-43b1-8cc2-7a460fdeb76f\") " pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.817229 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.977107 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"44202548-736a-47cc-93f0-622fca103c29","Type":"ContainerDied","Data":"bfb5b211d7f7982d45f1166b8b31a37432d7480f1f2ef38c88c12206a3bb22f9"} Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.977660 4801 scope.go:117] "RemoveContainer" containerID="b80e967374077c3a72433b7e49bbdb652565636392607e2604f0aa2f0b071d47" Nov 24 21:32:26 crc kubenswrapper[4801]: I1124 21:32:26.977862 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.020735 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.042220 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.062754 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.065034 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.068867 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.068915 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.078666 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.194690 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shpl\" (UniqueName: \"kubernetes.io/projected/54d96f58-e273-4784-b806-ff37bf6267de-kube-api-access-6shpl\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.195625 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.195665 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-config-data\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.195750 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.298197 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shpl\" (UniqueName: \"kubernetes.io/projected/54d96f58-e273-4784-b806-ff37bf6267de-kube-api-access-6shpl\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.298467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.298499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-config-data\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.299639 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.305914 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-config-data\") pod \"mysqld-exporter-0\" (UID: 
\"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.306522 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.308262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d96f58-e273-4784-b806-ff37bf6267de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.334468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shpl\" (UniqueName: \"kubernetes.io/projected/54d96f58-e273-4784-b806-ff37bf6267de-kube-api-access-6shpl\") pod \"mysqld-exporter-0\" (UID: \"54d96f58-e273-4784-b806-ff37bf6267de\") " pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: W1124 21:32:27.361054 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c22e81_bd05_43b1_8cc2_7a460fdeb76f.slice/crio-9a3d1a7f9aad0012988ae6f1643083b2f6023482e2d613629e87ebeb6617446d WatchSource:0}: Error finding container 9a3d1a7f9aad0012988ae6f1643083b2f6023482e2d613629e87ebeb6617446d: Status 404 returned error can't find the container with id 9a3d1a7f9aad0012988ae6f1643083b2f6023482e2d613629e87ebeb6617446d Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.364444 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.364519 4801 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.396614 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.697022 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.697913 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-central-agent" containerID="cri-o://9f14d2aaba6b834de66e844fcb38f653d246287e1a41376eddaacadd1f5474b3" gracePeriod=30 Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.698088 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="proxy-httpd" containerID="cri-o://242236f6246faa7ff2807aadcd27170a4f9b2380436e4933907ab849ea46e8b3" gracePeriod=30 Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.698148 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="sg-core" containerID="cri-o://1998ebf6d75f12234c1cd99e4e0ad5e985d38a794c7c6429c349c4af7a7d0806" gracePeriod=30 Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.698196 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-notification-agent" containerID="cri-o://aefe58858a235a79c536db83fd99cc54f99ecda07a635d66782b89f3fde08282" gracePeriod=30 Nov 24 21:32:27 crc kubenswrapper[4801]: I1124 21:32:27.905710 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.010047 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"54d96f58-e273-4784-b806-ff37bf6267de","Type":"ContainerStarted","Data":"756f572d60be52c5089ef0d8d8bd26063c572052a02fcffa603e834479111b12"} Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.013862 4801 generic.go:334] "Generic (PLEG): container finished" podID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerID="242236f6246faa7ff2807aadcd27170a4f9b2380436e4933907ab849ea46e8b3" exitCode=0 Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.013899 4801 generic.go:334] "Generic (PLEG): container finished" podID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerID="1998ebf6d75f12234c1cd99e4e0ad5e985d38a794c7c6429c349c4af7a7d0806" exitCode=2 Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.013959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerDied","Data":"242236f6246faa7ff2807aadcd27170a4f9b2380436e4933907ab849ea46e8b3"} Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.014020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerDied","Data":"1998ebf6d75f12234c1cd99e4e0ad5e985d38a794c7c6429c349c4af7a7d0806"} Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.016089 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25c22e81-bd05-43b1-8cc2-7a460fdeb76f","Type":"ContainerStarted","Data":"9a3d1a7f9aad0012988ae6f1643083b2f6023482e2d613629e87ebeb6617446d"} Nov 24 21:32:28 crc kubenswrapper[4801]: I1124 21:32:28.684192 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44202548-736a-47cc-93f0-622fca103c29" path="/var/lib/kubelet/pods/44202548-736a-47cc-93f0-622fca103c29/volumes" Nov 24 21:32:29 crc kubenswrapper[4801]: I1124 21:32:29.038660 4801 generic.go:334] "Generic 
(PLEG): container finished" podID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerID="9f14d2aaba6b834de66e844fcb38f653d246287e1a41376eddaacadd1f5474b3" exitCode=0 Nov 24 21:32:29 crc kubenswrapper[4801]: I1124 21:32:29.038877 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerDied","Data":"9f14d2aaba6b834de66e844fcb38f653d246287e1a41376eddaacadd1f5474b3"} Nov 24 21:32:29 crc kubenswrapper[4801]: I1124 21:32:29.041630 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25c22e81-bd05-43b1-8cc2-7a460fdeb76f","Type":"ContainerStarted","Data":"31e7cc2a8b09d955a82e5fc465771ada1c3f62e25ad772afc7d27967a43a5c24"} Nov 24 21:32:29 crc kubenswrapper[4801]: I1124 21:32:29.041945 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:32:29 crc kubenswrapper[4801]: I1124 21:32:29.073554 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.656220578 podStartE2EDuration="3.073525875s" podCreationTimestamp="2025-11-24 21:32:26 +0000 UTC" firstStartedPulling="2025-11-24 21:32:27.364092338 +0000 UTC m=+1519.446679018" lastFinishedPulling="2025-11-24 21:32:27.781397645 +0000 UTC m=+1519.863984315" observedRunningTime="2025-11-24 21:32:29.063246385 +0000 UTC m=+1521.145833075" watchObservedRunningTime="2025-11-24 21:32:29.073525875 +0000 UTC m=+1521.156112545" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.081223 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"54d96f58-e273-4784-b806-ff37bf6267de","Type":"ContainerStarted","Data":"b618cc1a8b3e848c727a1fc65318b8c398461eb1ceb4c1ea8d9d217482eaa857"} Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.101483 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerID="aefe58858a235a79c536db83fd99cc54f99ecda07a635d66782b89f3fde08282" exitCode=0 Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.102566 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerDied","Data":"aefe58858a235a79c536db83fd99cc54f99ecda07a635d66782b89f3fde08282"} Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.111839 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.712509812 podStartE2EDuration="3.111807924s" podCreationTimestamp="2025-11-24 21:32:27 +0000 UTC" firstStartedPulling="2025-11-24 21:32:27.916416701 +0000 UTC m=+1519.999003371" lastFinishedPulling="2025-11-24 21:32:29.315714813 +0000 UTC m=+1521.398301483" observedRunningTime="2025-11-24 21:32:30.097109828 +0000 UTC m=+1522.179696488" watchObservedRunningTime="2025-11-24 21:32:30.111807924 +0000 UTC m=+1522.194394614" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.221627 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.312946 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313169 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313269 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313317 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313426 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313510 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5cs\" (UniqueName: 
\"kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313730 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.313749 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts\") pod \"e6d6d840-7596-415b-a988-b6980ad6cd3e\" (UID: \"e6d6d840-7596-415b-a988-b6980ad6cd3e\") " Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.315632 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.315660 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6d6d840-7596-415b-a988-b6980ad6cd3e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.321268 4801 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs" (OuterVolumeSpecName: "kube-api-access-wm5cs") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "kube-api-access-wm5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.321734 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts" (OuterVolumeSpecName: "scripts") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.365592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.418669 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.418702 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5cs\" (UniqueName: \"kubernetes.io/projected/e6d6d840-7596-415b-a988-b6980ad6cd3e-kube-api-access-wm5cs\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.418713 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.425842 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.510877 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data" (OuterVolumeSpecName: "config-data") pod "e6d6d840-7596-415b-a988-b6980ad6cd3e" (UID: "e6d6d840-7596-415b-a988-b6980ad6cd3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.520934 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:30 crc kubenswrapper[4801]: I1124 21:32:30.520971 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6d840-7596-415b-a988-b6980ad6cd3e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.117945 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6d6d840-7596-415b-a988-b6980ad6cd3e","Type":"ContainerDied","Data":"d24ebc3ab692d026d868191dc9de2076aa8b845dc71c7b1c76aae09627029bd4"} Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.118537 4801 scope.go:117] "RemoveContainer" containerID="242236f6246faa7ff2807aadcd27170a4f9b2380436e4933907ab849ea46e8b3" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.118035 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.153469 4801 scope.go:117] "RemoveContainer" containerID="1998ebf6d75f12234c1cd99e4e0ad5e985d38a794c7c6429c349c4af7a7d0806" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.157612 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.173223 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.185879 4801 scope.go:117] "RemoveContainer" containerID="aefe58858a235a79c536db83fd99cc54f99ecda07a635d66782b89f3fde08282" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.187856 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:31 crc kubenswrapper[4801]: E1124 21:32:31.188682 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-central-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.188702 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-central-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: E1124 21:32:31.188736 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="proxy-httpd" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.188743 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="proxy-httpd" Nov 24 21:32:31 crc kubenswrapper[4801]: E1124 21:32:31.188753 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-notification-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.188759 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-notification-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: E1124 21:32:31.188794 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="sg-core" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.188800 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="sg-core" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.189025 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="proxy-httpd" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.189041 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="sg-core" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.189055 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-notification-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.189066 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" containerName="ceilometer-central-agent" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.191612 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.194207 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.195161 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.196280 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.200286 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.240805 4801 scope.go:117] "RemoveContainer" containerID="9f14d2aaba6b834de66e844fcb38f653d246287e1a41376eddaacadd1f5474b3" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242613 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242725 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242844 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 
21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242925 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242946 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.242961 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.243012 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.243036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dql5t\" (UniqueName: \"kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344461 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dql5t\" (UniqueName: \"kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344582 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344629 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344697 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344759 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd\") pod \"ceilometer-0\" (UID: 
\"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344776 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.344790 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.345316 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.346812 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.351790 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.364493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.364819 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.364889 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.365092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.368659 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dql5t\" (UniqueName: \"kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t\") pod \"ceilometer-0\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " pod="openstack/ceilometer-0" Nov 24 21:32:31 crc kubenswrapper[4801]: I1124 21:32:31.524104 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:32 crc kubenswrapper[4801]: W1124 21:32:32.103738 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc7f909_304a_44f3_9deb_9845777b13aa.slice/crio-ce4ae35989db9e9a53fe59348cd156b70c95090bf4cdf2a07a22c26c47cd5e91 WatchSource:0}: Error finding container ce4ae35989db9e9a53fe59348cd156b70c95090bf4cdf2a07a22c26c47cd5e91: Status 404 returned error can't find the container with id ce4ae35989db9e9a53fe59348cd156b70c95090bf4cdf2a07a22c26c47cd5e91 Nov 24 21:32:32 crc kubenswrapper[4801]: I1124 21:32:32.109861 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:32 crc kubenswrapper[4801]: I1124 21:32:32.147585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerStarted","Data":"ce4ae35989db9e9a53fe59348cd156b70c95090bf4cdf2a07a22c26c47cd5e91"} Nov 24 21:32:32 crc kubenswrapper[4801]: I1124 21:32:32.686816 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d6d840-7596-415b-a988-b6980ad6cd3e" path="/var/lib/kubelet/pods/e6d6d840-7596-415b-a988-b6980ad6cd3e/volumes" Nov 24 21:32:33 crc kubenswrapper[4801]: I1124 21:32:33.168631 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerStarted","Data":"7178c953b2b95c6cc430e163016f1c4cea918beca04500bc91e7b61baec56d95"} Nov 24 21:32:34 crc kubenswrapper[4801]: I1124 21:32:34.195348 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerStarted","Data":"346b9a0152ef7b8919e434889260f27c4d277081bfce54e9c8e5e17cbb44dd55"} Nov 24 21:32:34 crc kubenswrapper[4801]: I1124 21:32:34.973919 4801 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-db-sync-x7r2c"] Nov 24 21:32:34 crc kubenswrapper[4801]: I1124 21:32:34.998444 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-x7r2c"] Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.072782 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qsj4k"] Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.074993 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.103942 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qsj4k"] Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.190022 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.190197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pnr4\" (UniqueName: \"kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.190228 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.209848 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerStarted","Data":"09ee968e165f5c61acf169e56bf3d17a2ada70c13c5ccf03ed554d9488393c9e"} Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.292967 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.293152 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pnr4\" (UniqueName: \"kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.293184 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.299693 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.335434 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " 
pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.349887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pnr4\" (UniqueName: \"kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4\") pod \"heat-db-sync-qsj4k\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.402785 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qsj4k" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.667918 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:32:35 crc kubenswrapper[4801]: E1124 21:32:35.669216 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:32:35 crc kubenswrapper[4801]: I1124 21:32:35.935671 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qsj4k"] Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.228282 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qsj4k" event={"ID":"36e49661-1ce6-4040-a709-54ce907173d5","Type":"ContainerStarted","Data":"ac1ece7c4c55a775633495d820abc71098011c78ad572d2740940fd02c6f91a7"} Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.232941 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerStarted","Data":"8754616d299cb2f9ab7e3f647e4597755f8e6c6d562cb587989d32c09a7404c1"} Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.233225 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.686198 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a6644f-aa22-41cb-bf2a-4930db050d45" path="/var/lib/kubelet/pods/45a6644f-aa22-41cb-bf2a-4930db050d45/volumes" Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.833055 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:32:36 crc kubenswrapper[4801]: I1124 21:32:36.882450 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.218749404 podStartE2EDuration="5.882406246s" podCreationTimestamp="2025-11-24 21:32:31 +0000 UTC" firstStartedPulling="2025-11-24 21:32:32.108735532 +0000 UTC m=+1524.191322212" lastFinishedPulling="2025-11-24 21:32:35.772392384 +0000 UTC m=+1527.854979054" observedRunningTime="2025-11-24 21:32:36.266038267 +0000 UTC m=+1528.348624957" watchObservedRunningTime="2025-11-24 21:32:36.882406246 +0000 UTC m=+1528.964992916" Nov 24 21:32:37 crc kubenswrapper[4801]: I1124 21:32:37.476814 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:38 crc kubenswrapper[4801]: I1124 21:32:38.434121 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:32:40 crc kubenswrapper[4801]: I1124 21:32:40.644458 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:40 crc kubenswrapper[4801]: I1124 21:32:40.645977 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-central-agent" containerID="cri-o://7178c953b2b95c6cc430e163016f1c4cea918beca04500bc91e7b61baec56d95" gracePeriod=30 Nov 24 21:32:40 crc kubenswrapper[4801]: I1124 21:32:40.646750 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="proxy-httpd" containerID="cri-o://8754616d299cb2f9ab7e3f647e4597755f8e6c6d562cb587989d32c09a7404c1" gracePeriod=30 Nov 24 21:32:40 crc kubenswrapper[4801]: I1124 21:32:40.646802 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="sg-core" containerID="cri-o://09ee968e165f5c61acf169e56bf3d17a2ada70c13c5ccf03ed554d9488393c9e" gracePeriod=30 Nov 24 21:32:40 crc kubenswrapper[4801]: I1124 21:32:40.646840 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-notification-agent" containerID="cri-o://346b9a0152ef7b8919e434889260f27c4d277081bfce54e9c8e5e17cbb44dd55" gracePeriod=30 Nov 24 21:32:41 crc kubenswrapper[4801]: I1124 21:32:41.351719 4801 generic.go:334] "Generic (PLEG): container finished" podID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerID="8754616d299cb2f9ab7e3f647e4597755f8e6c6d562cb587989d32c09a7404c1" exitCode=0 Nov 24 21:32:41 crc kubenswrapper[4801]: I1124 21:32:41.351765 4801 generic.go:334] "Generic (PLEG): container finished" podID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerID="09ee968e165f5c61acf169e56bf3d17a2ada70c13c5ccf03ed554d9488393c9e" exitCode=2 Nov 24 21:32:41 crc kubenswrapper[4801]: I1124 21:32:41.351794 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerDied","Data":"8754616d299cb2f9ab7e3f647e4597755f8e6c6d562cb587989d32c09a7404c1"} Nov 24 21:32:41 crc kubenswrapper[4801]: I1124 21:32:41.351833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerDied","Data":"09ee968e165f5c61acf169e56bf3d17a2ada70c13c5ccf03ed554d9488393c9e"} Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.370038 4801 generic.go:334] "Generic (PLEG): container finished" podID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerID="346b9a0152ef7b8919e434889260f27c4d277081bfce54e9c8e5e17cbb44dd55" exitCode=0 Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.370719 4801 generic.go:334] "Generic (PLEG): container finished" podID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerID="7178c953b2b95c6cc430e163016f1c4cea918beca04500bc91e7b61baec56d95" exitCode=0 Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.370761 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerDied","Data":"346b9a0152ef7b8919e434889260f27c4d277081bfce54e9c8e5e17cbb44dd55"} Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.370798 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerDied","Data":"7178c953b2b95c6cc430e163016f1c4cea918beca04500bc91e7b61baec56d95"} Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.517572 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.676948 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677123 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677170 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677191 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677392 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677443 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677486 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.677517 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dql5t\" (UniqueName: \"kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t\") pod \"bbc7f909-304a-44f3-9deb-9845777b13aa\" (UID: \"bbc7f909-304a-44f3-9deb-9845777b13aa\") " Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.681461 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.681750 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.706700 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts" (OuterVolumeSpecName: "scripts") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.732548 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t" (OuterVolumeSpecName: "kube-api-access-dql5t") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "kube-api-access-dql5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.743808 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.765541 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" containerID="cri-o://7fe476b866813afe8c14c393885c5c7f6a9df092d39f3acccf375873e74f50d1" gracePeriod=604795 Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.780993 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.781032 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.781041 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.781050 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc7f909-304a-44f3-9deb-9845777b13aa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.781060 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dql5t\" (UniqueName: \"kubernetes.io/projected/bbc7f909-304a-44f3-9deb-9845777b13aa-kube-api-access-dql5t\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.782439 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod 
"bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.823458 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.883633 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.883680 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.885344 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data" (OuterVolumeSpecName: "config-data") pod "bbc7f909-304a-44f3-9deb-9845777b13aa" (UID: "bbc7f909-304a-44f3-9deb-9845777b13aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:42 crc kubenswrapper[4801]: I1124 21:32:42.986517 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc7f909-304a-44f3-9deb-9845777b13aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.391782 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc7f909-304a-44f3-9deb-9845777b13aa","Type":"ContainerDied","Data":"ce4ae35989db9e9a53fe59348cd156b70c95090bf4cdf2a07a22c26c47cd5e91"} Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.391868 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.391879 4801 scope.go:117] "RemoveContainer" containerID="8754616d299cb2f9ab7e3f647e4597755f8e6c6d562cb587989d32c09a7404c1" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.442939 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.455293 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.468262 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:43 crc kubenswrapper[4801]: E1124 21:32:43.468959 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-central-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.468979 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-central-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: E1124 21:32:43.469000 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" 
containerName="proxy-httpd" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469089 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="proxy-httpd" Nov 24 21:32:43 crc kubenswrapper[4801]: E1124 21:32:43.469151 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-notification-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469158 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-notification-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: E1124 21:32:43.469180 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="sg-core" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469186 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="sg-core" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469434 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-central-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469457 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="sg-core" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469466 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="proxy-httpd" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.469481 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" containerName="ceilometer-notification-agent" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.471957 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.476070 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.476258 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.476423 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.490895 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.605942 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606238 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606349 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-config-data\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606496 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-scripts\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606600 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-log-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606673 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-run-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r559c\" (UniqueName: \"kubernetes.io/projected/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-kube-api-access-r559c\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.606955 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.710084 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r559c\" (UniqueName: \"kubernetes.io/projected/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-kube-api-access-r559c\") pod \"ceilometer-0\" 
(UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.710549 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711212 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711250 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711273 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-config-data\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711337 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-scripts\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-log-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.711447 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-run-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.712022 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-log-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.712050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-run-httpd\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.716764 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.717108 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 
21:32:43.717163 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-scripts\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.718145 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.720383 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-config-data\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.726638 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r559c\" (UniqueName: \"kubernetes.io/projected/94ec5195-8cde-44c7-8d98-7bf19a4b20e0-kube-api-access-r559c\") pod \"ceilometer-0\" (UID: \"94ec5195-8cde-44c7-8d98-7bf19a4b20e0\") " pod="openstack/ceilometer-0" Nov 24 21:32:43 crc kubenswrapper[4801]: I1124 21:32:43.827912 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:32:44 crc kubenswrapper[4801]: I1124 21:32:44.527474 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" containerID="cri-o://fc1f3038cdaabe8ddb5a1dbea4535a51d079fafea1c55116464b23e1c2fa69e9" gracePeriod=604794 Nov 24 21:32:44 crc kubenswrapper[4801]: I1124 21:32:44.683848 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc7f909-304a-44f3-9deb-9845777b13aa" path="/var/lib/kubelet/pods/bbc7f909-304a-44f3-9deb-9845777b13aa/volumes" Nov 24 21:32:45 crc kubenswrapper[4801]: I1124 21:32:45.079017 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Nov 24 21:32:45 crc kubenswrapper[4801]: I1124 21:32:45.445683 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Nov 24 21:32:46 crc kubenswrapper[4801]: I1124 21:32:46.664760 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:32:46 crc kubenswrapper[4801]: E1124 21:32:46.665687 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:32:47 crc 
kubenswrapper[4801]: I1124 21:32:47.664880 4801 scope.go:117] "RemoveContainer" containerID="09ee968e165f5c61acf169e56bf3d17a2ada70c13c5ccf03ed554d9488393c9e" Nov 24 21:32:50 crc kubenswrapper[4801]: I1124 21:32:50.500666 4801 scope.go:117] "RemoveContainer" containerID="346b9a0152ef7b8919e434889260f27c4d277081bfce54e9c8e5e17cbb44dd55" Nov 24 21:32:50 crc kubenswrapper[4801]: I1124 21:32:50.521656 4801 generic.go:334] "Generic (PLEG): container finished" podID="af143054-b9a1-432a-a0f8-9f489550bd24" containerID="7fe476b866813afe8c14c393885c5c7f6a9df092d39f3acccf375873e74f50d1" exitCode=0 Nov 24 21:32:50 crc kubenswrapper[4801]: I1124 21:32:50.521794 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerDied","Data":"7fe476b866813afe8c14c393885c5c7f6a9df092d39f3acccf375873e74f50d1"} Nov 24 21:32:51 crc kubenswrapper[4801]: I1124 21:32:51.572741 4801 generic.go:334] "Generic (PLEG): container finished" podID="052262eb-3362-4169-a9e2-96e364d20be8" containerID="fc1f3038cdaabe8ddb5a1dbea4535a51d079fafea1c55116464b23e1c2fa69e9" exitCode=0 Nov 24 21:32:51 crc kubenswrapper[4801]: I1124 21:32:51.572847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerDied","Data":"fc1f3038cdaabe8ddb5a1dbea4535a51d079fafea1c55116464b23e1c2fa69e9"} Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.200403 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.207042 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.211576 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.223981 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289027 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289232 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289347 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289464 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lwl\" (UniqueName: \"kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289713 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.289989 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393313 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lwl\" (UniqueName: \"kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393385 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: 
\"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393509 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393588 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.393677 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.394570 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.395460 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.396939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.396941 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.397166 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 
21:32:54.397483 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.418246 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lwl\" (UniqueName: \"kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl\") pod \"dnsmasq-dns-7d84b4d45c-zg46q\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:54 crc kubenswrapper[4801]: I1124 21:32:54.546187 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:32:55 crc kubenswrapper[4801]: I1124 21:32:55.446218 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.271878 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.392174 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.398299 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.398452 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.398601 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.398654 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bbmc\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.400033 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.400110 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.400239 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.401689 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.401817 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.401856 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"af143054-b9a1-432a-a0f8-9f489550bd24\" (UID: \"af143054-b9a1-432a-a0f8-9f489550bd24\") " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 
21:32:57.404445 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.406172 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.407280 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.408927 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info" (OuterVolumeSpecName: "pod-info") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.409142 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.410279 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.410312 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af143054-b9a1-432a-a0f8-9f489550bd24-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.410325 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.410340 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.410355 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.413202 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.414606 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.436261 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc" (OuterVolumeSpecName: "kube-api-access-8bbmc") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "kube-api-access-8bbmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.463595 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data" (OuterVolumeSpecName: "config-data") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.503035 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf" (OuterVolumeSpecName: "server-conf") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: "af143054-b9a1-432a-a0f8-9f489550bd24"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.514192 4801 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.514244 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bbmc\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-kube-api-access-8bbmc\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.514257 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af143054-b9a1-432a-a0f8-9f489550bd24-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.514269 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af143054-b9a1-432a-a0f8-9f489550bd24-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.514310 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.560110 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.597216 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "af143054-b9a1-432a-a0f8-9f489550bd24" (UID: 
"af143054-b9a1-432a-a0f8-9f489550bd24"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.617748 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af143054-b9a1-432a-a0f8-9f489550bd24-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.617789 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.695736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"af143054-b9a1-432a-a0f8-9f489550bd24","Type":"ContainerDied","Data":"488474bfd44a78f699ce79584dfecc8b7e8a94969879cc9005adbf8855dbb746"} Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.695956 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.757582 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.787392 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.798777 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:57 crc kubenswrapper[4801]: E1124 21:32:57.799450 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="setup-container" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.799469 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="setup-container" Nov 24 21:32:57 crc kubenswrapper[4801]: E1124 21:32:57.799529 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.799536 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.799795 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.802497 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.871404 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950007 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950099 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxvm\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-kube-api-access-prxvm\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950150 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-server-conf\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950293 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950575 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950606 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950631 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.950805 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881365b-14f3-4392-930e-8d054a993b96-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.951100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-config-data\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:57 crc kubenswrapper[4801]: I1124 21:32:57.951300 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881365b-14f3-4392-930e-8d054a993b96-pod-info\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059267 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881365b-14f3-4392-930e-8d054a993b96-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059352 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-config-data\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881365b-14f3-4392-930e-8d054a993b96-pod-info\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059447 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " 
pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxvm\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-kube-api-access-prxvm\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059543 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-server-conf\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059624 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059646 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.059710 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.060464 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.060648 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-config-data\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.061280 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-server-conf\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.063300 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881365b-14f3-4392-930e-8d054a993b96-plugins-conf\") pod \"rabbitmq-server-2\" (UID: 
\"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.066061 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.076611 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.077178 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.079970 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881365b-14f3-4392-930e-8d054a993b96-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.081971 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881365b-14f3-4392-930e-8d054a993b96-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.109117 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-prxvm\" (UniqueName: \"kubernetes.io/projected/1881365b-14f3-4392-930e-8d054a993b96-kube-api-access-prxvm\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.132220 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881365b-14f3-4392-930e-8d054a993b96-pod-info\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.170421 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-2\" (UID: \"1881365b-14f3-4392-930e-8d054a993b96\") " pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.175792 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.265707 4801 scope.go:117] "RemoveContainer" containerID="7178c953b2b95c6cc430e163016f1c4cea918beca04500bc91e7b61baec56d95" Nov 24 21:32:58 crc kubenswrapper[4801]: E1124 21:32:58.301872 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 21:32:58 crc kubenswrapper[4801]: E1124 21:32:58.301952 4801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 21:32:58 crc kubenswrapper[4801]: E1124 21:32:58.302118 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pnr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qsj4k_openstack(36e49661-1ce6-4040-a709-54ce907173d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
24 21:32:58 crc kubenswrapper[4801]: E1124 21:32:58.303647 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qsj4k" podUID="36e49661-1ce6-4040-a709-54ce907173d5" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.445146 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.460919 4801 scope.go:117] "RemoveContainer" containerID="7fe476b866813afe8c14c393885c5c7f6a9df092d39f3acccf375873e74f50d1" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.555135 4801 scope.go:117] "RemoveContainer" containerID="b56a953951e0148a3ed7af819ebbd4de7c7816d9238af1bd0bd53a4ca0afb848" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.583311 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.583446 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584201 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc 
kubenswrapper[4801]: I1124 21:32:58.584303 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584421 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584567 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584625 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cxrm\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584675 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584710 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd\") pod 
\"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584871 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.584900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info\") pod \"052262eb-3362-4169-a9e2-96e364d20be8\" (UID: \"052262eb-3362-4169-a9e2-96e364d20be8\") " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.586546 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.587548 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.588695 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.591334 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.591364 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.591404 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.593856 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info" (OuterVolumeSpecName: "pod-info") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.594182 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.595147 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.604097 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.610358 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm" (OuterVolumeSpecName: "kube-api-access-4cxrm") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "kube-api-access-4cxrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.687468 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf" (OuterVolumeSpecName: "server-conf") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.693040 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" path="/var/lib/kubelet/pods/af143054-b9a1-432a-a0f8-9f489550bd24/volumes" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694494 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/052262eb-3362-4169-a9e2-96e364d20be8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694522 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694550 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694562 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cxrm\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-kube-api-access-4cxrm\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694576 4801 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.694585 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/052262eb-3362-4169-a9e2-96e364d20be8-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.709215 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data" (OuterVolumeSpecName: "config-data") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.735285 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.735572 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"052262eb-3362-4169-a9e2-96e364d20be8","Type":"ContainerDied","Data":"d95f1d7145b0da0abfff345a22a0d35cd42e537d13c7ae17a6ff6c0afb7436ac"} Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.735621 4801 scope.go:117] "RemoveContainer" containerID="fc1f3038cdaabe8ddb5a1dbea4535a51d079fafea1c55116464b23e1c2fa69e9" Nov 24 21:32:58 crc kubenswrapper[4801]: E1124 21:32:58.738934 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qsj4k" podUID="36e49661-1ce6-4040-a709-54ce907173d5" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.741657 4801 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.772187 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "052262eb-3362-4169-a9e2-96e364d20be8" (UID: "052262eb-3362-4169-a9e2-96e364d20be8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.777309 4801 scope.go:117] "RemoveContainer" containerID="dcfc6398313ef14cefed31e1a783d37f942b5f8c01dcd5371474a4e544086dd1" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.797867 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052262eb-3362-4169-a9e2-96e364d20be8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.797901 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.797916 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/052262eb-3362-4169-a9e2-96e364d20be8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:58 crc kubenswrapper[4801]: I1124 21:32:58.896669 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.017571 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.107767 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:32:59 crc 
kubenswrapper[4801]: I1124 21:32:59.134137 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.146680 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:32:59 crc kubenswrapper[4801]: E1124 21:32:59.147859 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.148017 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" Nov 24 21:32:59 crc kubenswrapper[4801]: E1124 21:32:59.148085 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="setup-container" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.148160 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="setup-container" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.148519 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="052262eb-3362-4169-a9e2-96e364d20be8" containerName="rabbitmq" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.150185 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.153275 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.155128 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.155434 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.155795 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xptr6" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.155910 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.156014 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.157015 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.180379 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.180453 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342268 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342328 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342388 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342430 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342512 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342541 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342793 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.342836 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.343036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktlm\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-kube-api-access-tktlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449350 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449390 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449454 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449471 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449495 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449541 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449569 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449589 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449607 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.449650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktlm\" (UniqueName: 
\"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-kube-api-access-tktlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.451057 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.455932 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.456914 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.457538 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.458425 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.459714 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.464431 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.465469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.466690 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.467320 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc 
kubenswrapper[4801]: I1124 21:32:59.469758 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktlm\" (UniqueName: \"kubernetes.io/projected/bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a-kube-api-access-tktlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.518260 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.664265 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:32:59 crc kubenswrapper[4801]: E1124 21:32:59.665472 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.755395 4801 generic.go:334] "Generic (PLEG): container finished" podID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerID="75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a" exitCode=0 Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.756414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" event={"ID":"42862bff-40e1-4b6f-af43-9a29ce8a8494","Type":"ContainerDied","Data":"75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a"} Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 
21:32:59.756612 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" event={"ID":"42862bff-40e1-4b6f-af43-9a29ce8a8494","Type":"ContainerStarted","Data":"f014ab72da7bde0e6184fcd3cf99929b15c322a02e9b7af5a5037cc6bdccec40"} Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.761693 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.765976 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94ec5195-8cde-44c7-8d98-7bf19a4b20e0","Type":"ContainerStarted","Data":"efb78b898f41683be38e153d92899a3d8db0045a98b35d6a42a8534fa3a18f30"} Nov 24 21:32:59 crc kubenswrapper[4801]: I1124 21:32:59.768752 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"1881365b-14f3-4392-930e-8d054a993b96","Type":"ContainerStarted","Data":"60c1f77019eb58264e67d3fc15c930c4cd7af9768f45ad9916056999fb8cd577"} Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.078436 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="af143054-b9a1-432a-a0f8-9f489550bd24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: i/o timeout" Nov 24 21:33:00 crc kubenswrapper[4801]: W1124 21:33:00.335570 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb53adc7_d56d_4fd7_b9ca_1070c5bc5d9a.slice/crio-223171426d9b84a042c2f8596f1cac63b3d40392c2076390aa79ec607b5e4e24 WatchSource:0}: Error finding container 223171426d9b84a042c2f8596f1cac63b3d40392c2076390aa79ec607b5e4e24: Status 404 returned error can't find the container with id 223171426d9b84a042c2f8596f1cac63b3d40392c2076390aa79ec607b5e4e24 Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.337125 4801 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.680193 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052262eb-3362-4169-a9e2-96e364d20be8" path="/var/lib/kubelet/pods/052262eb-3362-4169-a9e2-96e364d20be8/volumes" Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.783895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" event={"ID":"42862bff-40e1-4b6f-af43-9a29ce8a8494","Type":"ContainerStarted","Data":"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87"} Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.784015 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.786211 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a","Type":"ContainerStarted","Data":"223171426d9b84a042c2f8596f1cac63b3d40392c2076390aa79ec607b5e4e24"} Nov 24 21:33:00 crc kubenswrapper[4801]: I1124 21:33:00.820208 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" podStartSLOduration=6.820169428 podStartE2EDuration="6.820169428s" podCreationTimestamp="2025-11-24 21:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:33:00.808906329 +0000 UTC m=+1552.891493019" watchObservedRunningTime="2025-11-24 21:33:00.820169428 +0000 UTC m=+1552.902756118" Nov 24 21:33:01 crc kubenswrapper[4801]: I1124 21:33:01.800671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"1881365b-14f3-4392-930e-8d054a993b96","Type":"ContainerStarted","Data":"9163a17f1991d08b4b96c189eb158631cecee337cbcf21e63482d02acde83d06"} Nov 24 21:33:02 
crc kubenswrapper[4801]: I1124 21:33:02.824629 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a","Type":"ContainerStarted","Data":"737aba54435cb635d05c89c8a9d053dcb0f4d3526d55635e7d10fd2abc3f8524"} Nov 24 21:33:04 crc kubenswrapper[4801]: I1124 21:33:04.863007 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94ec5195-8cde-44c7-8d98-7bf19a4b20e0","Type":"ContainerStarted","Data":"e642fb5b420a05633b4d1c3ad72d7d9e9d2ec3006535b236fd1ecc84a1b1e72f"} Nov 24 21:33:05 crc kubenswrapper[4801]: I1124 21:33:05.911409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94ec5195-8cde-44c7-8d98-7bf19a4b20e0","Type":"ContainerStarted","Data":"1424dde566199d20f68316f98a39ab20a619084cbca9c3857fa3b0323750e62b"} Nov 24 21:33:06 crc kubenswrapper[4801]: I1124 21:33:06.929500 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94ec5195-8cde-44c7-8d98-7bf19a4b20e0","Type":"ContainerStarted","Data":"c16ab86c08fe26cd9728d02ac7c50764e294271b023354f1946712b234700f03"} Nov 24 21:33:08 crc kubenswrapper[4801]: I1124 21:33:08.965889 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94ec5195-8cde-44c7-8d98-7bf19a4b20e0","Type":"ContainerStarted","Data":"bb691fe1eb51e5f5780e4a07de924fb1f05ba9298c64bd9ca85b95eed2ec46d7"} Nov 24 21:33:08 crc kubenswrapper[4801]: I1124 21:33:08.967560 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.000293 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=17.201166247 podStartE2EDuration="26.000252448s" podCreationTimestamp="2025-11-24 21:32:43 +0000 UTC" firstStartedPulling="2025-11-24 21:32:58.901639249 +0000 UTC 
m=+1550.984225919" lastFinishedPulling="2025-11-24 21:33:07.70072542 +0000 UTC m=+1559.783312120" observedRunningTime="2025-11-24 21:33:08.992725655 +0000 UTC m=+1561.075312325" watchObservedRunningTime="2025-11-24 21:33:09.000252448 +0000 UTC m=+1561.082839148" Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.548549 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.656933 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.657201 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="dnsmasq-dns" containerID="cri-o://52cf4297114bc46fda3a9c797eb7cc7ac3d5364790bb5e3ea655b416d367d0a1" gracePeriod=10 Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.865354 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-pbw5k"] Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.871342 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:09 crc kubenswrapper[4801]: I1124 21:33:09.923258 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-pbw5k"] Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.022532 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" event={"ID":"15a1163d-2c92-4175-abb9-ef08391fdf5c","Type":"ContainerDied","Data":"52cf4297114bc46fda3a9c797eb7cc7ac3d5364790bb5e3ea655b416d367d0a1"} Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.026720 4801 generic.go:334] "Generic (PLEG): container finished" podID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerID="52cf4297114bc46fda3a9c797eb7cc7ac3d5364790bb5e3ea655b416d367d0a1" exitCode=0 Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037777 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037800 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-config\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037839 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mwk\" (UniqueName: \"kubernetes.io/projected/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-kube-api-access-q6mwk\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.037911 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.140378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: 
\"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.140620 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-config\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.140822 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.140988 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mwk\" (UniqueName: \"kubernetes.io/projected/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-kube-api-access-q6mwk\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.141185 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.141587 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-config\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.142198 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.142477 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.142784 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.143013 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.143823 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc 
kubenswrapper[4801]: I1124 21:33:10.144472 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.155482 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.169756 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mwk\" (UniqueName: \"kubernetes.io/projected/6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7-kube-api-access-q6mwk\") pod \"dnsmasq-dns-6f6df4f56c-pbw5k\" (UID: \"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.235649 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.437768 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.571584 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.571779 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.571912 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.572234 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2ncm\" (UniqueName: \"kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.576606 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.576704 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb\") pod \"15a1163d-2c92-4175-abb9-ef08391fdf5c\" (UID: \"15a1163d-2c92-4175-abb9-ef08391fdf5c\") " Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.593172 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm" (OuterVolumeSpecName: "kube-api-access-k2ncm") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). InnerVolumeSpecName "kube-api-access-k2ncm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.672671 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.683908 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2ncm\" (UniqueName: \"kubernetes.io/projected/15a1163d-2c92-4175-abb9-ef08391fdf5c-kube-api-access-k2ncm\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.683942 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.691611 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config" (OuterVolumeSpecName: "config") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.715830 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.731066 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.744682 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15a1163d-2c92-4175-abb9-ef08391fdf5c" (UID: "15a1163d-2c92-4175-abb9-ef08391fdf5c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.791492 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.791523 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.791534 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.791547 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1163d-2c92-4175-abb9-ef08391fdf5c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:10 crc kubenswrapper[4801]: I1124 21:33:10.832294 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-pbw5k"] Nov 24 21:33:10 crc kubenswrapper[4801]: W1124 21:33:10.841932 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c838ec2_ad0b_43d4_b4f1_8c897ce67ff7.slice/crio-eeaa83ca1cd879528053f4213094e6f7f4aea2b79e83e2dd6aab7831e64ca0c5 WatchSource:0}: Error finding container eeaa83ca1cd879528053f4213094e6f7f4aea2b79e83e2dd6aab7831e64ca0c5: Status 404 returned error can't find the container with id eeaa83ca1cd879528053f4213094e6f7f4aea2b79e83e2dd6aab7831e64ca0c5 Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.043421 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" 
event={"ID":"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7","Type":"ContainerStarted","Data":"eeaa83ca1cd879528053f4213094e6f7f4aea2b79e83e2dd6aab7831e64ca0c5"} Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.049493 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" event={"ID":"15a1163d-2c92-4175-abb9-ef08391fdf5c","Type":"ContainerDied","Data":"135b9886db0adaea4197d1469b6e18e5d6976eea096fbb3ba450235dedc8f627"} Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.049558 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.049578 4801 scope.go:117] "RemoveContainer" containerID="52cf4297114bc46fda3a9c797eb7cc7ac3d5364790bb5e3ea655b416d367d0a1" Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.083884 4801 scope.go:117] "RemoveContainer" containerID="8c5eba765539f9db5bedc07b3c65dd804dabe0db082e3aabdfca00f61ef5a009" Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.120660 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:33:11 crc kubenswrapper[4801]: I1124 21:33:11.144010 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-58sz2"] Nov 24 21:33:12 crc kubenswrapper[4801]: I1124 21:33:12.071578 4801 generic.go:334] "Generic (PLEG): container finished" podID="6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7" containerID="41886b422d018442bc19be951b44dadcbddb4221ebe274fcdf29ed3ae083e6b8" exitCode=0 Nov 24 21:33:12 crc kubenswrapper[4801]: I1124 21:33:12.071837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" event={"ID":"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7","Type":"ContainerDied","Data":"41886b422d018442bc19be951b44dadcbddb4221ebe274fcdf29ed3ae083e6b8"} Nov 24 21:33:12 crc kubenswrapper[4801]: I1124 21:33:12.082349 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qsj4k" event={"ID":"36e49661-1ce6-4040-a709-54ce907173d5","Type":"ContainerStarted","Data":"1d3a6bc4ad0f5eafb2ac66de5169de7fe6964a1d88aaf9dc37f274b1db661850"} Nov 24 21:33:12 crc kubenswrapper[4801]: I1124 21:33:12.141018 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qsj4k" podStartSLOduration=2.2337240879999998 podStartE2EDuration="37.140994048s" podCreationTimestamp="2025-11-24 21:32:35 +0000 UTC" firstStartedPulling="2025-11-24 21:32:35.965460769 +0000 UTC m=+1528.048047439" lastFinishedPulling="2025-11-24 21:33:10.872730719 +0000 UTC m=+1562.955317399" observedRunningTime="2025-11-24 21:33:12.13978726 +0000 UTC m=+1564.222373930" watchObservedRunningTime="2025-11-24 21:33:12.140994048 +0000 UTC m=+1564.223580738" Nov 24 21:33:12 crc kubenswrapper[4801]: I1124 21:33:12.678266 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" path="/var/lib/kubelet/pods/15a1163d-2c92-4175-abb9-ef08391fdf5c/volumes" Nov 24 21:33:13 crc kubenswrapper[4801]: I1124 21:33:13.105662 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" event={"ID":"6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7","Type":"ContainerStarted","Data":"addd55b670f5174d89dae48dfc602ae121a579140715af225832809ab90dfd88"} Nov 24 21:33:13 crc kubenswrapper[4801]: I1124 21:33:13.106032 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:13 crc kubenswrapper[4801]: I1124 21:33:13.152175 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" podStartSLOduration=4.152113665 podStartE2EDuration="4.152113665s" podCreationTimestamp="2025-11-24 21:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:33:13.136833331 +0000 UTC m=+1565.219420011" watchObservedRunningTime="2025-11-24 21:33:13.152113665 +0000 UTC m=+1565.234700355" Nov 24 21:33:13 crc kubenswrapper[4801]: I1124 21:33:13.664487 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:33:13 crc kubenswrapper[4801]: E1124 21:33:13.665056 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:33:14 crc kubenswrapper[4801]: I1124 21:33:14.126027 4801 generic.go:334] "Generic (PLEG): container finished" podID="36e49661-1ce6-4040-a709-54ce907173d5" containerID="1d3a6bc4ad0f5eafb2ac66de5169de7fe6964a1d88aaf9dc37f274b1db661850" exitCode=0 Nov 24 21:33:14 crc kubenswrapper[4801]: I1124 21:33:14.126148 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qsj4k" event={"ID":"36e49661-1ce6-4040-a709-54ce907173d5","Type":"ContainerDied","Data":"1d3a6bc4ad0f5eafb2ac66de5169de7fe6964a1d88aaf9dc37f274b1db661850"} Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.195027 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-58sz2" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.1:5353: i/o timeout" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.720914 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qsj4k" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.774415 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pnr4\" (UniqueName: \"kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4\") pod \"36e49661-1ce6-4040-a709-54ce907173d5\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.774658 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data\") pod \"36e49661-1ce6-4040-a709-54ce907173d5\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.775058 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle\") pod \"36e49661-1ce6-4040-a709-54ce907173d5\" (UID: \"36e49661-1ce6-4040-a709-54ce907173d5\") " Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.787573 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4" (OuterVolumeSpecName: "kube-api-access-7pnr4") pod "36e49661-1ce6-4040-a709-54ce907173d5" (UID: "36e49661-1ce6-4040-a709-54ce907173d5"). InnerVolumeSpecName "kube-api-access-7pnr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.860205 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36e49661-1ce6-4040-a709-54ce907173d5" (UID: "36e49661-1ce6-4040-a709-54ce907173d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.879111 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.879146 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pnr4\" (UniqueName: \"kubernetes.io/projected/36e49661-1ce6-4040-a709-54ce907173d5-kube-api-access-7pnr4\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.922695 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data" (OuterVolumeSpecName: "config-data") pod "36e49661-1ce6-4040-a709-54ce907173d5" (UID: "36e49661-1ce6-4040-a709-54ce907173d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:15 crc kubenswrapper[4801]: I1124 21:33:15.982743 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e49661-1ce6-4040-a709-54ce907173d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:16 crc kubenswrapper[4801]: I1124 21:33:16.163719 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qsj4k" event={"ID":"36e49661-1ce6-4040-a709-54ce907173d5","Type":"ContainerDied","Data":"ac1ece7c4c55a775633495d820abc71098011c78ad572d2740940fd02c6f91a7"} Nov 24 21:33:16 crc kubenswrapper[4801]: I1124 21:33:16.163775 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1ece7c4c55a775633495d820abc71098011c78ad572d2740940fd02c6f91a7" Nov 24 21:33:16 crc kubenswrapper[4801]: I1124 21:33:16.164088 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qsj4k" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.139944 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f86b8d6b6-l58mw"] Nov 24 21:33:17 crc kubenswrapper[4801]: E1124 21:33:17.141071 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="dnsmasq-dns" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.141093 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="dnsmasq-dns" Nov 24 21:33:17 crc kubenswrapper[4801]: E1124 21:33:17.141140 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="init" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.141149 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="init" Nov 24 21:33:17 crc kubenswrapper[4801]: E1124 21:33:17.141185 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e49661-1ce6-4040-a709-54ce907173d5" containerName="heat-db-sync" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.141195 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e49661-1ce6-4040-a709-54ce907173d5" containerName="heat-db-sync" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.141538 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a1163d-2c92-4175-abb9-ef08391fdf5c" containerName="dnsmasq-dns" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.141586 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e49661-1ce6-4040-a709-54ce907173d5" containerName="heat-db-sync" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.142830 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.161566 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f86b8d6b6-l58mw"] Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.220731 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-combined-ca-bundle\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.221130 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb94\" (UniqueName: \"kubernetes.io/projected/ea436a56-ba1e-4897-8f60-6811f68c4754-kube-api-access-crb94\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.221234 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.221478 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data-custom\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.238244 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-cd576cf44-85blw"] Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.240940 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.258580 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cd576cf44-85blw"] Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.280489 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-69677c9fd-fttvq"] Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.282739 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.307674 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69677c9fd-fttvq"] Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-public-tls-certs\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ml6x\" (UniqueName: \"kubernetes.io/projected/5bfdd410-6e0a-42f8-8317-eaee1161c428-kube-api-access-4ml6x\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325702 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-combined-ca-bundle\") pod 
\"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-internal-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325819 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-combined-ca-bundle\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325852 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb94\" (UniqueName: \"kubernetes.io/projected/ea436a56-ba1e-4897-8f60-6811f68c4754-kube-api-access-crb94\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325886 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data-custom\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.325979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-public-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-internal-tls-certs\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326084 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-combined-ca-bundle\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326114 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data-custom\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326151 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326180 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl22\" (UniqueName: \"kubernetes.io/projected/06168c63-0386-4a8b-b1c5-455f0efc4ebf-kube-api-access-kxl22\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326249 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.326311 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data-custom\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.341244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data-custom\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.346337 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-combined-ca-bundle\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.348606 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea436a56-ba1e-4897-8f60-6811f68c4754-config-data\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.372060 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb94\" (UniqueName: \"kubernetes.io/projected/ea436a56-ba1e-4897-8f60-6811f68c4754-kube-api-access-crb94\") pod \"heat-engine-5f86b8d6b6-l58mw\" (UID: \"ea436a56-ba1e-4897-8f60-6811f68c4754\") " pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.428900 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429037 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-public-tls-certs\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429076 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ml6x\" (UniqueName: 
\"kubernetes.io/projected/5bfdd410-6e0a-42f8-8317-eaee1161c428-kube-api-access-4ml6x\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-internal-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-combined-ca-bundle\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429189 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data-custom\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429227 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-public-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-internal-tls-certs\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429299 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-combined-ca-bundle\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429322 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data-custom\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429350 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.429378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl22\" (UniqueName: \"kubernetes.io/projected/06168c63-0386-4a8b-b1c5-455f0efc4ebf-kube-api-access-kxl22\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.435281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-combined-ca-bundle\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.446700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.449505 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-combined-ca-bundle\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.450696 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data-custom\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.451426 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-public-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.453130 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-internal-tls-certs\") pod 
\"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.453157 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-config-data-custom\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.455050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfdd410-6e0a-42f8-8317-eaee1161c428-internal-tls-certs\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.455353 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-public-tls-certs\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.467871 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06168c63-0386-4a8b-b1c5-455f0efc4ebf-config-data\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.477303 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxl22\" (UniqueName: \"kubernetes.io/projected/06168c63-0386-4a8b-b1c5-455f0efc4ebf-kube-api-access-kxl22\") pod \"heat-api-cd576cf44-85blw\" (UID: \"06168c63-0386-4a8b-b1c5-455f0efc4ebf\") " 
pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.483597 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.487191 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ml6x\" (UniqueName: \"kubernetes.io/projected/5bfdd410-6e0a-42f8-8317-eaee1161c428-kube-api-access-4ml6x\") pod \"heat-cfnapi-69677c9fd-fttvq\" (UID: \"5bfdd410-6e0a-42f8-8317-eaee1161c428\") " pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.577268 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:17 crc kubenswrapper[4801]: I1124 21:33:17.617713 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:18 crc kubenswrapper[4801]: I1124 21:33:18.201485 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69677c9fd-fttvq"] Nov 24 21:33:18 crc kubenswrapper[4801]: I1124 21:33:18.218265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69677c9fd-fttvq" event={"ID":"5bfdd410-6e0a-42f8-8317-eaee1161c428","Type":"ContainerStarted","Data":"14d11e73c3e1540bebfa9250ce19a7022fade406fc915d5f8974b488ce0ff2fd"} Nov 24 21:33:18 crc kubenswrapper[4801]: I1124 21:33:18.225079 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f86b8d6b6-l58mw"] Nov 24 21:33:18 crc kubenswrapper[4801]: W1124 21:33:18.228201 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06168c63_0386_4a8b_b1c5_455f0efc4ebf.slice/crio-98a11bb87ce31dfaa1739c104c750c7e7f7a0aa0dfbf78af254d2a4a962805f8 WatchSource:0}: Error finding container 
98a11bb87ce31dfaa1739c104c750c7e7f7a0aa0dfbf78af254d2a4a962805f8: Status 404 returned error can't find the container with id 98a11bb87ce31dfaa1739c104c750c7e7f7a0aa0dfbf78af254d2a4a962805f8 Nov 24 21:33:18 crc kubenswrapper[4801]: W1124 21:33:18.228771 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea436a56_ba1e_4897_8f60_6811f68c4754.slice/crio-e4deea39e3d1504836b14508911c9b3376d94337cebdb6cc4c42ef49262edfe5 WatchSource:0}: Error finding container e4deea39e3d1504836b14508911c9b3376d94337cebdb6cc4c42ef49262edfe5: Status 404 returned error can't find the container with id e4deea39e3d1504836b14508911c9b3376d94337cebdb6cc4c42ef49262edfe5 Nov 24 21:33:18 crc kubenswrapper[4801]: I1124 21:33:18.244027 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cd576cf44-85blw"] Nov 24 21:33:19 crc kubenswrapper[4801]: I1124 21:33:19.240280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cd576cf44-85blw" event={"ID":"06168c63-0386-4a8b-b1c5-455f0efc4ebf","Type":"ContainerStarted","Data":"98a11bb87ce31dfaa1739c104c750c7e7f7a0aa0dfbf78af254d2a4a962805f8"} Nov 24 21:33:19 crc kubenswrapper[4801]: I1124 21:33:19.253670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f86b8d6b6-l58mw" event={"ID":"ea436a56-ba1e-4897-8f60-6811f68c4754","Type":"ContainerStarted","Data":"b7f587b5788fd96e2dc0ae4366b32a851d66d1388879465305b033dd2b8d9611"} Nov 24 21:33:19 crc kubenswrapper[4801]: I1124 21:33:19.253729 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f86b8d6b6-l58mw" event={"ID":"ea436a56-ba1e-4897-8f60-6811f68c4754","Type":"ContainerStarted","Data":"e4deea39e3d1504836b14508911c9b3376d94337cebdb6cc4c42ef49262edfe5"} Nov 24 21:33:19 crc kubenswrapper[4801]: I1124 21:33:19.255493 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:19 crc kubenswrapper[4801]: I1124 21:33:19.286672 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5f86b8d6b6-l58mw" podStartSLOduration=2.286642328 podStartE2EDuration="2.286642328s" podCreationTimestamp="2025-11-24 21:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:33:19.273651794 +0000 UTC m=+1571.356238464" watchObservedRunningTime="2025-11-24 21:33:19.286642328 +0000 UTC m=+1571.369229038" Nov 24 21:33:20 crc kubenswrapper[4801]: I1124 21:33:20.238892 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-pbw5k" Nov 24 21:33:20 crc kubenswrapper[4801]: I1124 21:33:20.365008 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:33:20 crc kubenswrapper[4801]: I1124 21:33:20.377640 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="dnsmasq-dns" containerID="cri-o://35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87" gracePeriod=10 Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.145423 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.292635 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.292733 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.292775 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.293411 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.293516 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.293739 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.293898 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4lwl\" (UniqueName: \"kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl\") pod \"42862bff-40e1-4b6f-af43-9a29ce8a8494\" (UID: \"42862bff-40e1-4b6f-af43-9a29ce8a8494\") " Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.296901 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69677c9fd-fttvq" event={"ID":"5bfdd410-6e0a-42f8-8317-eaee1161c428","Type":"ContainerStarted","Data":"b0c350b749b9e80b0de3d4084871a1a05649766346fa0b8784eda582e325642c"} Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.298628 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.301435 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cd576cf44-85blw" event={"ID":"06168c63-0386-4a8b-b1c5-455f0efc4ebf","Type":"ContainerStarted","Data":"ec32ba4805c90f63c6026eba1481e49d0c0026443c6bc2579cfcabba1143d05d"} Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.302020 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.303942 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.303968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" event={"ID":"42862bff-40e1-4b6f-af43-9a29ce8a8494","Type":"ContainerDied","Data":"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87"} Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.304022 4801 scope.go:117] "RemoveContainer" containerID="35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.304060 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl" (OuterVolumeSpecName: "kube-api-access-k4lwl") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "kube-api-access-k4lwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.303878 4801 generic.go:334] "Generic (PLEG): container finished" podID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerID="35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87" exitCode=0 Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.304336 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zg46q" event={"ID":"42862bff-40e1-4b6f-af43-9a29ce8a8494","Type":"ContainerDied","Data":"f014ab72da7bde0e6184fcd3cf99929b15c322a02e9b7af5a5037cc6bdccec40"} Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.358768 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-69677c9fd-fttvq" podStartSLOduration=2.661119116 podStartE2EDuration="4.358741906s" podCreationTimestamp="2025-11-24 21:33:17 +0000 UTC" firstStartedPulling="2025-11-24 21:33:18.181056092 +0000 UTC m=+1570.263642762" 
lastFinishedPulling="2025-11-24 21:33:19.878678882 +0000 UTC m=+1571.961265552" observedRunningTime="2025-11-24 21:33:21.349997816 +0000 UTC m=+1573.432584496" watchObservedRunningTime="2025-11-24 21:33:21.358741906 +0000 UTC m=+1573.441328576" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.399520 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cd576cf44-85blw" podStartSLOduration=2.749872129 podStartE2EDuration="4.39948168s" podCreationTimestamp="2025-11-24 21:33:17 +0000 UTC" firstStartedPulling="2025-11-24 21:33:18.234453057 +0000 UTC m=+1570.317039727" lastFinishedPulling="2025-11-24 21:33:19.884062608 +0000 UTC m=+1571.966649278" observedRunningTime="2025-11-24 21:33:21.376083474 +0000 UTC m=+1573.458670144" watchObservedRunningTime="2025-11-24 21:33:21.39948168 +0000 UTC m=+1573.482068370" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.420120 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4lwl\" (UniqueName: \"kubernetes.io/projected/42862bff-40e1-4b6f-af43-9a29ce8a8494-kube-api-access-k4lwl\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.465060 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.465270 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.473030 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.485460 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config" (OuterVolumeSpecName: "config") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.501554 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.517852 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42862bff-40e1-4b6f-af43-9a29ce8a8494" (UID: "42862bff-40e1-4b6f-af43-9a29ce8a8494"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.526852 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.526958 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.527017 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.527030 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.527040 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.527068 4801 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42862bff-40e1-4b6f-af43-9a29ce8a8494-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.611937 4801 scope.go:117] "RemoveContainer" containerID="75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.657278 4801 scope.go:117] "RemoveContainer" 
containerID="35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87" Nov 24 21:33:21 crc kubenswrapper[4801]: E1124 21:33:21.658043 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87\": container with ID starting with 35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87 not found: ID does not exist" containerID="35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.658104 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87"} err="failed to get container status \"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87\": rpc error: code = NotFound desc = could not find container \"35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87\": container with ID starting with 35d1ae0bfbae80d9f5056bebd32c6becf3c41a81cc8cf7f6cfe2727ba6fbef87 not found: ID does not exist" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.658144 4801 scope.go:117] "RemoveContainer" containerID="75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a" Nov 24 21:33:21 crc kubenswrapper[4801]: E1124 21:33:21.658846 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a\": container with ID starting with 75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a not found: ID does not exist" containerID="75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.658912 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a"} err="failed to get container status \"75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a\": rpc error: code = NotFound desc = could not find container \"75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a\": container with ID starting with 75c41c8b3f3fb7e92c2a6d24b0eb6e75094b79bf5e2d8349e8230a8797cdb86a not found: ID does not exist" Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.667657 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:33:21 crc kubenswrapper[4801]: I1124 21:33:21.685079 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zg46q"] Nov 24 21:33:22 crc kubenswrapper[4801]: I1124 21:33:22.702145 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" path="/var/lib/kubelet/pods/42862bff-40e1-4b6f-af43-9a29ce8a8494/volumes" Nov 24 21:33:24 crc kubenswrapper[4801]: I1124 21:33:24.665757 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:33:24 crc kubenswrapper[4801]: E1124 21:33:24.666793 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.146106 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-cd576cf44-85blw" Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.161943 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-69677c9fd-fttvq" Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.256234 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.257098 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-66f65cf495-v7bwm" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerName="heat-api" containerID="cri-o://46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496" gracePeriod=60 Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.267261 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:33:29 crc kubenswrapper[4801]: I1124 21:33:29.267598 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerName="heat-cfnapi" containerID="cri-o://bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874" gracePeriod=60 Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.202874 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx"] Nov 24 21:33:30 crc kubenswrapper[4801]: E1124 21:33:30.203498 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="init" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.203514 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="init" Nov 24 21:33:30 crc kubenswrapper[4801]: E1124 21:33:30.203576 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="dnsmasq-dns" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.203582 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="dnsmasq-dns" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.203859 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="42862bff-40e1-4b6f-af43-9a29ce8a8494" containerName="dnsmasq-dns" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.205048 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.207558 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.209543 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.209872 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.210130 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.230108 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx"] Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.252614 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.252764 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.253313 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxqx\" (UniqueName: \"kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.253396 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.357854 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.357998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: 
\"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.358052 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chxqx\" (UniqueName: \"kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.358134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.367234 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.367961 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.368050 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.384133 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxqx\" (UniqueName: \"kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:30 crc kubenswrapper[4801]: I1124 21:33:30.531304 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:31 crc kubenswrapper[4801]: I1124 21:33:31.668069 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx"] Nov 24 21:33:31 crc kubenswrapper[4801]: W1124 21:33:31.677800 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe732949_920b_4e0c_ac7e_773a6983a64b.slice/crio-dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed WatchSource:0}: Error finding container dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed: Status 404 returned error can't find the container with id dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed Nov 24 21:33:32 crc kubenswrapper[4801]: I1124 21:33:32.450019 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerName="heat-cfnapi" probeResult="failure" output="Get 
\"https://10.217.0.228:8000/healthcheck\": read tcp 10.217.0.2:45754->10.217.0.228:8000: read: connection reset by peer" Nov 24 21:33:32 crc kubenswrapper[4801]: I1124 21:33:32.459590 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-66f65cf495-v7bwm" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.229:8004/healthcheck\": read tcp 10.217.0.2:33428->10.217.0.229:8004: read: connection reset by peer" Nov 24 21:33:32 crc kubenswrapper[4801]: I1124 21:33:32.468286 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" event={"ID":"be732949-920b-4e0c-ac7e-773a6983a64b","Type":"ContainerStarted","Data":"dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed"} Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.119218 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.132414 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256575 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256647 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256728 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnv28\" (UniqueName: \"kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.256940 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257010 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257101 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257186 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257256 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vt9d\" (UniqueName: \"kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d\") pod \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\" (UID: \"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257295 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" 
(UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.257410 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data\") pod \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\" (UID: \"6cd0ff51-4f1c-429e-9f84-e71c784a221a\") " Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.272411 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28" (OuterVolumeSpecName: "kube-api-access-rnv28") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "kube-api-access-rnv28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.289789 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d" (OuterVolumeSpecName: "kube-api-access-2vt9d") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "kube-api-access-2vt9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.290092 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.290704 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.337157 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.346418 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data" (OuterVolumeSpecName: "config-data") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.353678 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361402 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361449 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361464 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnv28\" (UniqueName: \"kubernetes.io/projected/6cd0ff51-4f1c-429e-9f84-e71c784a221a-kube-api-access-rnv28\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361482 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361495 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361509 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.361524 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vt9d\" (UniqueName: \"kubernetes.io/projected/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-kube-api-access-2vt9d\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc 
kubenswrapper[4801]: I1124 21:33:33.366300 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.367747 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" (UID: "5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.370252 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data" (OuterVolumeSpecName: "config-data") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.386535 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.398980 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6cd0ff51-4f1c-429e-9f84-e71c784a221a" (UID: "6cd0ff51-4f1c-429e-9f84-e71c784a221a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.465037 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.465073 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.465087 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.465095 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.465104 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cd0ff51-4f1c-429e-9f84-e71c784a221a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.484714 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerID="46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496" exitCode=0 Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.484824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66f65cf495-v7bwm" event={"ID":"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a","Type":"ContainerDied","Data":"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496"} Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.484871 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66f65cf495-v7bwm" event={"ID":"5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a","Type":"ContainerDied","Data":"ab35bff3f4c7c56045d3139b1462636057b7d38bcfa33f1cf4ec68c7448ec3e0"} Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.484868 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66f65cf495-v7bwm" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.484905 4801 scope.go:117] "RemoveContainer" containerID="46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.488614 4801 generic.go:334] "Generic (PLEG): container finished" podID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerID="bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874" exitCode=0 Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.488685 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" event={"ID":"6cd0ff51-4f1c-429e-9f84-e71c784a221a","Type":"ContainerDied","Data":"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874"} Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.488734 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" event={"ID":"6cd0ff51-4f1c-429e-9f84-e71c784a221a","Type":"ContainerDied","Data":"fc096e8ca83e0d0cacf04f21a4a7f8b4454fedf673b0d5fda33480367e385c95"} 
Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.488837 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7777df5b9c-9zr67" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.535189 4801 scope.go:117] "RemoveContainer" containerID="46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496" Nov 24 21:33:33 crc kubenswrapper[4801]: E1124 21:33:33.536094 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496\": container with ID starting with 46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496 not found: ID does not exist" containerID="46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.536158 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496"} err="failed to get container status \"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496\": rpc error: code = NotFound desc = could not find container \"46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496\": container with ID starting with 46cce57f14f97326c239989391fa858311aa02ee128e739efe6b3797c9d2e496 not found: ID does not exist" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.536204 4801 scope.go:117] "RemoveContainer" containerID="bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.587650 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.610648 4801 scope.go:117] "RemoveContainer" containerID="bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874" Nov 24 21:33:33 crc kubenswrapper[4801]: E1124 
21:33:33.612174 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874\": container with ID starting with bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874 not found: ID does not exist" containerID="bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.612227 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874"} err="failed to get container status \"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874\": rpc error: code = NotFound desc = could not find container \"bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874\": container with ID starting with bb6eda7fdbd936c16c0f209fd9f2643638a998238956c5f80f39f9d64ff2e874 not found: ID does not exist" Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.632424 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-66f65cf495-v7bwm"] Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.680429 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:33:33 crc kubenswrapper[4801]: I1124 21:33:33.725214 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7777df5b9c-9zr67"] Nov 24 21:33:34 crc kubenswrapper[4801]: I1124 21:33:34.683314 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" path="/var/lib/kubelet/pods/5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a/volumes" Nov 24 21:33:34 crc kubenswrapper[4801]: I1124 21:33:34.685502 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" path="/var/lib/kubelet/pods/6cd0ff51-4f1c-429e-9f84-e71c784a221a/volumes" 
Nov 24 21:33:35 crc kubenswrapper[4801]: I1124 21:33:35.524857 4801 generic.go:334] "Generic (PLEG): container finished" podID="bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a" containerID="737aba54435cb635d05c89c8a9d053dcb0f4d3526d55635e7d10fd2abc3f8524" exitCode=0 Nov 24 21:33:35 crc kubenswrapper[4801]: I1124 21:33:35.525085 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a","Type":"ContainerDied","Data":"737aba54435cb635d05c89c8a9d053dcb0f4d3526d55635e7d10fd2abc3f8524"} Nov 24 21:33:35 crc kubenswrapper[4801]: I1124 21:33:35.529747 4801 generic.go:334] "Generic (PLEG): container finished" podID="1881365b-14f3-4392-930e-8d054a993b96" containerID="9163a17f1991d08b4b96c189eb158631cecee337cbcf21e63482d02acde83d06" exitCode=0 Nov 24 21:33:35 crc kubenswrapper[4801]: I1124 21:33:35.529800 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"1881365b-14f3-4392-930e-8d054a993b96","Type":"ContainerDied","Data":"9163a17f1991d08b4b96c189eb158631cecee337cbcf21e63482d02acde83d06"} Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.550298 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a","Type":"ContainerStarted","Data":"3d77937c881de1ba7a3236c316b8c552614cccf5fb384481943370cd5920fc8b"} Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.552863 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"1881365b-14f3-4392-930e-8d054a993b96","Type":"ContainerStarted","Data":"5c1f1d6360819edd1bab414140940ee82c91e479e90f670582e9da9505e03426"} Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.552950 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.553450 4801 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.576873 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.576852841 podStartE2EDuration="37.576852841s" podCreationTimestamp="2025-11-24 21:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:33:36.574766826 +0000 UTC m=+1588.657353496" watchObservedRunningTime="2025-11-24 21:33:36.576852841 +0000 UTC m=+1588.659439511" Nov 24 21:33:36 crc kubenswrapper[4801]: I1124 21:33:36.621421 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.621395612 podStartE2EDuration="39.621395612s" podCreationTimestamp="2025-11-24 21:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:33:36.60228561 +0000 UTC m=+1588.684872290" watchObservedRunningTime="2025-11-24 21:33:36.621395612 +0000 UTC m=+1588.703982282" Nov 24 21:33:37 crc kubenswrapper[4801]: I1124 21:33:37.545408 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f86b8d6b6-l58mw" Nov 24 21:33:37 crc kubenswrapper[4801]: I1124 21:33:37.609001 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:33:37 crc kubenswrapper[4801]: I1124 21:33:37.609283 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-c45b8fb99-cff82" podUID="b2db7465-73a5-4b4c-97ac-416d05659962" containerName="heat-engine" containerID="cri-o://6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" gracePeriod=60 Nov 24 21:33:38 crc kubenswrapper[4801]: I1124 21:33:38.675614 4801 scope.go:117] 
"RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:33:38 crc kubenswrapper[4801]: E1124 21:33:38.676224 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:33:39 crc kubenswrapper[4801]: I1124 21:33:39.470839 4801 scope.go:117] "RemoveContainer" containerID="bad48caa1251cb6a2598b5993ff76b256c0d500f40c1c3be4de4489eb6b0bc70" Nov 24 21:33:42 crc kubenswrapper[4801]: E1124 21:33:42.053720 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:33:42 crc kubenswrapper[4801]: E1124 21:33:42.055751 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:33:42 crc kubenswrapper[4801]: E1124 21:33:42.058151 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:33:42 crc kubenswrapper[4801]: E1124 21:33:42.058183 
4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-c45b8fb99-cff82" podUID="b2db7465-73a5-4b4c-97ac-416d05659962" containerName="heat-engine" Nov 24 21:33:44 crc kubenswrapper[4801]: I1124 21:33:44.028790 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.130210 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-qszd4"] Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.154778 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-qszd4"] Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.227510 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-8bbvs"] Nov 24 21:33:45 crc kubenswrapper[4801]: E1124 21:33:45.228430 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerName="heat-api" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.228448 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerName="heat-api" Nov 24 21:33:45 crc kubenswrapper[4801]: E1124 21:33:45.228477 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerName="heat-cfnapi" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.228485 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerName="heat-cfnapi" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.228779 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd0ff51-4f1c-429e-9f84-e71c784a221a" containerName="heat-cfnapi" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.228810 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5deee3e0-40bc-4bfb-b7d1-b1386ec79e5a" containerName="heat-api" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.231679 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.235244 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.250519 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8bbvs"] Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.305178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.305306 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.305473 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tttt6\" (UniqueName: \"kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.305520 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.408497 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.408561 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.408648 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tttt6\" (UniqueName: \"kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.408678 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.414021 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 
24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.415376 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.415751 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.450225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tttt6\" (UniqueName: \"kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6\") pod \"aodh-db-sync-8bbvs\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.573419 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.733883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" event={"ID":"be732949-920b-4e0c-ac7e-773a6983a64b","Type":"ContainerStarted","Data":"360388423add941aa8baddcb448584575b921f034592bf2fd048b5b4e98bf5c2"} Nov 24 21:33:45 crc kubenswrapper[4801]: I1124 21:33:45.800701 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" podStartSLOduration=2.455371346 podStartE2EDuration="15.800666359s" podCreationTimestamp="2025-11-24 21:33:30 +0000 UTC" firstStartedPulling="2025-11-24 21:33:31.681843935 +0000 UTC m=+1583.764430605" lastFinishedPulling="2025-11-24 21:33:45.027138948 +0000 UTC m=+1597.109725618" observedRunningTime="2025-11-24 21:33:45.773540338 +0000 UTC m=+1597.856127008" watchObservedRunningTime="2025-11-24 21:33:45.800666359 +0000 UTC m=+1597.883253029" Nov 24 21:33:46 crc kubenswrapper[4801]: W1124 21:33:46.177175 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041806f3_13fb_43cc_98c2_180c16b5e3ea.slice/crio-1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a WatchSource:0}: Error finding container 1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a: Status 404 returned error can't find the container with id 1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a Nov 24 21:33:46 crc kubenswrapper[4801]: I1124 21:33:46.181988 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8bbvs"] Nov 24 21:33:46 crc kubenswrapper[4801]: I1124 21:33:46.683204 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4ba8f9-6e5a-4073-b0f6-9311fb55d725" path="/var/lib/kubelet/pods/9e4ba8f9-6e5a-4073-b0f6-9311fb55d725/volumes" Nov 
24 21:33:46 crc kubenswrapper[4801]: I1124 21:33:46.758642 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8bbvs" event={"ID":"041806f3-13fb-43cc-98c2-180c16b5e3ea","Type":"ContainerStarted","Data":"1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a"} Nov 24 21:33:48 crc kubenswrapper[4801]: I1124 21:33:48.178894 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="1881365b-14f3-4392-930e-8d054a993b96" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.16:5671: connect: connection refused" Nov 24 21:33:49 crc kubenswrapper[4801]: I1124 21:33:49.767605 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:33:50 crc kubenswrapper[4801]: I1124 21:33:50.665831 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:33:50 crc kubenswrapper[4801]: E1124 21:33:50.672535 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:33:50 crc kubenswrapper[4801]: I1124 21:33:50.854323 4801 generic.go:334] "Generic (PLEG): container finished" podID="b2db7465-73a5-4b4c-97ac-416d05659962" containerID="6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" exitCode=0 Nov 24 21:33:50 crc kubenswrapper[4801]: I1124 21:33:50.854398 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c45b8fb99-cff82" 
event={"ID":"b2db7465-73a5-4b4c-97ac-416d05659962","Type":"ContainerDied","Data":"6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687"} Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.026577 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.157474 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kzfj\" (UniqueName: \"kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj\") pod \"b2db7465-73a5-4b4c-97ac-416d05659962\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.157762 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom\") pod \"b2db7465-73a5-4b4c-97ac-416d05659962\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.158380 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data\") pod \"b2db7465-73a5-4b4c-97ac-416d05659962\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.158441 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle\") pod \"b2db7465-73a5-4b4c-97ac-416d05659962\" (UID: \"b2db7465-73a5-4b4c-97ac-416d05659962\") " Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.165854 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom" 
(OuterVolumeSpecName: "config-data-custom") pod "b2db7465-73a5-4b4c-97ac-416d05659962" (UID: "b2db7465-73a5-4b4c-97ac-416d05659962"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.165872 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj" (OuterVolumeSpecName: "kube-api-access-4kzfj") pod "b2db7465-73a5-4b4c-97ac-416d05659962" (UID: "b2db7465-73a5-4b4c-97ac-416d05659962"). InnerVolumeSpecName "kube-api-access-4kzfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.210046 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2db7465-73a5-4b4c-97ac-416d05659962" (UID: "b2db7465-73a5-4b4c-97ac-416d05659962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.259713 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data" (OuterVolumeSpecName: "config-data") pod "b2db7465-73a5-4b4c-97ac-416d05659962" (UID: "b2db7465-73a5-4b4c-97ac-416d05659962"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.262175 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kzfj\" (UniqueName: \"kubernetes.io/projected/b2db7465-73a5-4b4c-97ac-416d05659962-kube-api-access-4kzfj\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.262205 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.262215 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.262224 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2db7465-73a5-4b4c-97ac-416d05659962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.895473 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c45b8fb99-cff82" event={"ID":"b2db7465-73a5-4b4c-97ac-416d05659962","Type":"ContainerDied","Data":"632bc7302d4053366f545b1cc0740a74112887b9dccc78bb5c119d82452acef1"} Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.895551 4801 scope.go:117] "RemoveContainer" containerID="6b6e2a534425819e1ba61056493a101b8ca6923b9c20768cdac2a3c55915f687" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.895730 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c45b8fb99-cff82" Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.904277 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8bbvs" event={"ID":"041806f3-13fb-43cc-98c2-180c16b5e3ea","Type":"ContainerStarted","Data":"82cfbdb1027b6b034bc5287e84f27a347e2bbb3596b2f70aea1740f4ca353283"} Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.936090 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.957008 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-c45b8fb99-cff82"] Nov 24 21:33:52 crc kubenswrapper[4801]: I1124 21:33:52.965014 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-8bbvs" podStartSLOduration=2.551342513 podStartE2EDuration="7.964984668s" podCreationTimestamp="2025-11-24 21:33:45 +0000 UTC" firstStartedPulling="2025-11-24 21:33:46.182648791 +0000 UTC m=+1598.265235451" lastFinishedPulling="2025-11-24 21:33:51.596290936 +0000 UTC m=+1603.678877606" observedRunningTime="2025-11-24 21:33:52.950093696 +0000 UTC m=+1605.032680366" watchObservedRunningTime="2025-11-24 21:33:52.964984668 +0000 UTC m=+1605.047571338" Nov 24 21:33:54 crc kubenswrapper[4801]: I1124 21:33:54.695237 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2db7465-73a5-4b4c-97ac-416d05659962" path="/var/lib/kubelet/pods/b2db7465-73a5-4b4c-97ac-416d05659962/volumes" Nov 24 21:33:54 crc kubenswrapper[4801]: I1124 21:33:54.948146 4801 generic.go:334] "Generic (PLEG): container finished" podID="041806f3-13fb-43cc-98c2-180c16b5e3ea" containerID="82cfbdb1027b6b034bc5287e84f27a347e2bbb3596b2f70aea1740f4ca353283" exitCode=0 Nov 24 21:33:54 crc kubenswrapper[4801]: I1124 21:33:54.948212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8bbvs" 
event={"ID":"041806f3-13fb-43cc-98c2-180c16b5e3ea","Type":"ContainerDied","Data":"82cfbdb1027b6b034bc5287e84f27a347e2bbb3596b2f70aea1740f4ca353283"} Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.562821 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.628169 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data\") pod \"041806f3-13fb-43cc-98c2-180c16b5e3ea\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.628635 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tttt6\" (UniqueName: \"kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6\") pod \"041806f3-13fb-43cc-98c2-180c16b5e3ea\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.628754 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts\") pod \"041806f3-13fb-43cc-98c2-180c16b5e3ea\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.628806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle\") pod \"041806f3-13fb-43cc-98c2-180c16b5e3ea\" (UID: \"041806f3-13fb-43cc-98c2-180c16b5e3ea\") " Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.637321 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6" (OuterVolumeSpecName: 
"kube-api-access-tttt6") pod "041806f3-13fb-43cc-98c2-180c16b5e3ea" (UID: "041806f3-13fb-43cc-98c2-180c16b5e3ea"). InnerVolumeSpecName "kube-api-access-tttt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.640303 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts" (OuterVolumeSpecName: "scripts") pod "041806f3-13fb-43cc-98c2-180c16b5e3ea" (UID: "041806f3-13fb-43cc-98c2-180c16b5e3ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.674413 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data" (OuterVolumeSpecName: "config-data") pod "041806f3-13fb-43cc-98c2-180c16b5e3ea" (UID: "041806f3-13fb-43cc-98c2-180c16b5e3ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.703178 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "041806f3-13fb-43cc-98c2-180c16b5e3ea" (UID: "041806f3-13fb-43cc-98c2-180c16b5e3ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.732619 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tttt6\" (UniqueName: \"kubernetes.io/projected/041806f3-13fb-43cc-98c2-180c16b5e3ea-kube-api-access-tttt6\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.733094 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.733141 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.733156 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041806f3-13fb-43cc-98c2-180c16b5e3ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.975847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8bbvs" event={"ID":"041806f3-13fb-43cc-98c2-180c16b5e3ea","Type":"ContainerDied","Data":"1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a"} Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.975905 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1414e97b602b04129f19c8927dce11c7944c7454860c967cd3488f1c62da037a" Nov 24 21:33:56 crc kubenswrapper[4801]: I1124 21:33:56.975932 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8bbvs" Nov 24 21:33:57 crc kubenswrapper[4801]: I1124 21:33:57.994072 4801 generic.go:334] "Generic (PLEG): container finished" podID="be732949-920b-4e0c-ac7e-773a6983a64b" containerID="360388423add941aa8baddcb448584575b921f034592bf2fd048b5b4e98bf5c2" exitCode=0 Nov 24 21:33:57 crc kubenswrapper[4801]: I1124 21:33:57.994134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" event={"ID":"be732949-920b-4e0c-ac7e-773a6983a64b","Type":"ContainerDied","Data":"360388423add941aa8baddcb448584575b921f034592bf2fd048b5b4e98bf5c2"} Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.179099 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.243724 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.602696 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.603509 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-api" containerID="cri-o://ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7" gracePeriod=30 Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.604232 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-listener" containerID="cri-o://79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4" gracePeriod=30 Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.604399 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" 
containerName="aodh-notifier" containerID="cri-o://96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1" gracePeriod=30 Nov 24 21:33:58 crc kubenswrapper[4801]: I1124 21:33:58.604539 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-evaluator" containerID="cri-o://74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e" gracePeriod=30 Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.010763 4801 generic.go:334] "Generic (PLEG): container finished" podID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerID="ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7" exitCode=0 Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.010861 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerDied","Data":"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7"} Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.633471 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.746248 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory\") pod \"be732949-920b-4e0c-ac7e-773a6983a64b\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.746699 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chxqx\" (UniqueName: \"kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx\") pod \"be732949-920b-4e0c-ac7e-773a6983a64b\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.746783 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle\") pod \"be732949-920b-4e0c-ac7e-773a6983a64b\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.746835 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key\") pod \"be732949-920b-4e0c-ac7e-773a6983a64b\" (UID: \"be732949-920b-4e0c-ac7e-773a6983a64b\") " Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.758244 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "be732949-920b-4e0c-ac7e-773a6983a64b" (UID: "be732949-920b-4e0c-ac7e-773a6983a64b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.767601 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx" (OuterVolumeSpecName: "kube-api-access-chxqx") pod "be732949-920b-4e0c-ac7e-773a6983a64b" (UID: "be732949-920b-4e0c-ac7e-773a6983a64b"). InnerVolumeSpecName "kube-api-access-chxqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.801063 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be732949-920b-4e0c-ac7e-773a6983a64b" (UID: "be732949-920b-4e0c-ac7e-773a6983a64b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.808459 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory" (OuterVolumeSpecName: "inventory") pod "be732949-920b-4e0c-ac7e-773a6983a64b" (UID: "be732949-920b-4e0c-ac7e-773a6983a64b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.849659 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chxqx\" (UniqueName: \"kubernetes.io/projected/be732949-920b-4e0c-ac7e-773a6983a64b-kube-api-access-chxqx\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.849688 4801 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.849702 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:59 crc kubenswrapper[4801]: I1124 21:33:59.849714 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be732949-920b-4e0c-ac7e-773a6983a64b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.029644 4801 generic.go:334] "Generic (PLEG): container finished" podID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerID="96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1" exitCode=0 Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.029690 4801 generic.go:334] "Generic (PLEG): container finished" podID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerID="74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e" exitCode=0 Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.029712 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerDied","Data":"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1"} Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 
21:34:00.029798 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerDied","Data":"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e"} Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.031771 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" event={"ID":"be732949-920b-4e0c-ac7e-773a6983a64b","Type":"ContainerDied","Data":"dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed"} Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.031818 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad10e9dc56e34fe25e14cdbc4bde5f9f88a404453db75539ec2ddd3f44922ed" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.031837 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.196353 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d"] Nov 24 21:34:00 crc kubenswrapper[4801]: E1124 21:34:00.197012 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be732949-920b-4e0c-ac7e-773a6983a64b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197041 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="be732949-920b-4e0c-ac7e-773a6983a64b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:00 crc kubenswrapper[4801]: E1124 21:34:00.197072 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2db7465-73a5-4b4c-97ac-416d05659962" containerName="heat-engine" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197083 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b2db7465-73a5-4b4c-97ac-416d05659962" containerName="heat-engine" Nov 24 21:34:00 crc kubenswrapper[4801]: E1124 21:34:00.197109 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041806f3-13fb-43cc-98c2-180c16b5e3ea" containerName="aodh-db-sync" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197116 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="041806f3-13fb-43cc-98c2-180c16b5e3ea" containerName="aodh-db-sync" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197353 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="be732949-920b-4e0c-ac7e-773a6983a64b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197411 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="041806f3-13fb-43cc-98c2-180c16b5e3ea" containerName="aodh-db-sync" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.197423 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2db7465-73a5-4b4c-97ac-416d05659962" containerName="heat-engine" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.198339 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.215908 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d"] Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.221104 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.221212 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.221386 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.221620 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.260595 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.261439 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.262002 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg46\" (UniqueName: \"kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.365484 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcg46\" (UniqueName: \"kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.365595 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.365739 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.371890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.372422 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.400148 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcg46\" (UniqueName: \"kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fqw4d\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.522709 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.810628 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884081 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884251 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68cft\" (UniqueName: \"kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884349 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884523 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884620 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.884713 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs\") pod \"88d4ec58-8523-407f-aa48-9f57aedbc143\" (UID: \"88d4ec58-8523-407f-aa48-9f57aedbc143\") " Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.892582 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft" (OuterVolumeSpecName: "kube-api-access-68cft") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "kube-api-access-68cft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.892991 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts" (OuterVolumeSpecName: "scripts") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.989393 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68cft\" (UniqueName: \"kubernetes.io/projected/88d4ec58-8523-407f-aa48-9f57aedbc143-kube-api-access-68cft\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:00 crc kubenswrapper[4801]: I1124 21:34:00.989434 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.048124 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.070114 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.071689 4801 generic.go:334] "Generic (PLEG): container finished" podID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerID="79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4" exitCode=0 Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.071744 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerDied","Data":"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4"} Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.071782 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88d4ec58-8523-407f-aa48-9f57aedbc143","Type":"ContainerDied","Data":"61ebdb3fe94726d59f382ae0c048847313c78899d8b7a520d2235769233c3c00"} Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.071801 4801 scope.go:117] "RemoveContainer" containerID="79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.072042 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.095900 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.095930 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.109539 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.151189 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data" (OuterVolumeSpecName: "config-data") pod "88d4ec58-8523-407f-aa48-9f57aedbc143" (UID: "88d4ec58-8523-407f-aa48-9f57aedbc143"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.173540 4801 scope.go:117] "RemoveContainer" containerID="96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.198521 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.198559 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88d4ec58-8523-407f-aa48-9f57aedbc143-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.204908 4801 scope.go:117] "RemoveContainer" containerID="74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.236680 4801 scope.go:117] "RemoveContainer" containerID="ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.243629 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d"] Nov 24 21:34:01 crc kubenswrapper[4801]: W1124 21:34:01.244154 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod806070c2_2599_47c1_9a86_bd078d5cc939.slice/crio-2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4 WatchSource:0}: Error finding container 2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4: Status 404 returned error can't find the container with id 2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4 Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.276158 4801 scope.go:117] "RemoveContainer" 
containerID="79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.276910 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4\": container with ID starting with 79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4 not found: ID does not exist" containerID="79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.276983 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4"} err="failed to get container status \"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4\": rpc error: code = NotFound desc = could not find container \"79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4\": container with ID starting with 79090775ff26260c57a44132768d9f11cbaee3c3508bc060b049c8bb153460a4 not found: ID does not exist" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.277022 4801 scope.go:117] "RemoveContainer" containerID="96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.277486 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1\": container with ID starting with 96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1 not found: ID does not exist" containerID="96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.277530 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1"} err="failed to get container status \"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1\": rpc error: code = NotFound desc = could not find container \"96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1\": container with ID starting with 96afd914137b82186147ce57fea117b85966e929eb566ff4a062129b9acb35e1 not found: ID does not exist" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.277559 4801 scope.go:117] "RemoveContainer" containerID="74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.277867 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e\": container with ID starting with 74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e not found: ID does not exist" containerID="74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.277889 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e"} err="failed to get container status \"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e\": rpc error: code = NotFound desc = could not find container \"74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e\": container with ID starting with 74dec2049636b548d71af9d41ff5bada0e4dac2000127c8a752f1a91722d238e not found: ID does not exist" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.277903 4801 scope.go:117] "RemoveContainer" containerID="ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.278255 4801 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7\": container with ID starting with ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7 not found: ID does not exist" containerID="ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.278279 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7"} err="failed to get container status \"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7\": rpc error: code = NotFound desc = could not find container \"ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7\": container with ID starting with ba8a91514d5e81388df93db3d50160bde2f3d970217436bc508bbb26f1de8ad7 not found: ID does not exist" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.447432 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.482645 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.510302 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.512021 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-evaluator" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512054 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-evaluator" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.512070 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-listener" Nov 24 21:34:01 crc 
kubenswrapper[4801]: I1124 21:34:01.512079 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-listener" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.512112 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-api" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512119 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-api" Nov 24 21:34:01 crc kubenswrapper[4801]: E1124 21:34:01.512159 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-notifier" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512167 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-notifier" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512706 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-notifier" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512741 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-listener" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512780 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-api" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.512818 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" containerName="aodh-evaluator" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.519162 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.522804 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.523859 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.523862 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.524845 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5ndmv" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.525089 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.531009 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.613563 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-config-data\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.613818 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-internal-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.613895 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-public-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.613932 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-scripts\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.614292 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmtm\" (UniqueName: \"kubernetes.io/projected/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-kube-api-access-lrmtm\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.614450 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.717815 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-config-data\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.717982 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-internal-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 
21:34:01.718024 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-public-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.718054 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-scripts\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.719550 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmtm\" (UniqueName: \"kubernetes.io/projected/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-kube-api-access-lrmtm\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.719745 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.724382 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-internal-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.724485 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-scripts\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" 
Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.725037 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.725984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-public-tls-certs\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.726344 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-config-data\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.740558 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmtm\" (UniqueName: \"kubernetes.io/projected/0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b-kube-api-access-lrmtm\") pod \"aodh-0\" (UID: \"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b\") " pod="openstack/aodh-0" Nov 24 21:34:01 crc kubenswrapper[4801]: I1124 21:34:01.845644 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:34:02 crc kubenswrapper[4801]: I1124 21:34:02.088993 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" event={"ID":"806070c2-2599-47c1-9a86-bd078d5cc939","Type":"ContainerStarted","Data":"2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4"} Nov 24 21:34:02 crc kubenswrapper[4801]: I1124 21:34:02.473407 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:34:02 crc kubenswrapper[4801]: I1124 21:34:02.682991 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d4ec58-8523-407f-aa48-9f57aedbc143" path="/var/lib/kubelet/pods/88d4ec58-8523-407f-aa48-9f57aedbc143/volumes" Nov 24 21:34:03 crc kubenswrapper[4801]: I1124 21:34:03.111349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" event={"ID":"806070c2-2599-47c1-9a86-bd078d5cc939","Type":"ContainerStarted","Data":"3678ee88209fbbfd9cd95ff05b02625750bf4d39c125496f244cc6801420098e"} Nov 24 21:34:03 crc kubenswrapper[4801]: I1124 21:34:03.121020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b","Type":"ContainerStarted","Data":"b0812f2e53396a2131f60a6f85514dc47f70b3aa26b61735ebac091e84c72a9f"} Nov 24 21:34:03 crc kubenswrapper[4801]: I1124 21:34:03.121089 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b","Type":"ContainerStarted","Data":"757e54837bbf11b57130ee810eab819d60db4923275f8d1e4d77fd80915dce47"} Nov 24 21:34:03 crc kubenswrapper[4801]: I1124 21:34:03.151059 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" podStartSLOduration=1.594100839 podStartE2EDuration="3.151036087s" podCreationTimestamp="2025-11-24 
21:34:00 +0000 UTC" firstStartedPulling="2025-11-24 21:34:01.250774395 +0000 UTC m=+1613.333361065" lastFinishedPulling="2025-11-24 21:34:02.807709633 +0000 UTC m=+1614.890296313" observedRunningTime="2025-11-24 21:34:03.129917523 +0000 UTC m=+1615.212504193" watchObservedRunningTime="2025-11-24 21:34:03.151036087 +0000 UTC m=+1615.233622747" Nov 24 21:34:03 crc kubenswrapper[4801]: I1124 21:34:03.667530 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" containerID="cri-o://3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69" gracePeriod=604795 Nov 24 21:34:05 crc kubenswrapper[4801]: I1124 21:34:05.044961 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Nov 24 21:34:05 crc kubenswrapper[4801]: I1124 21:34:05.167178 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b","Type":"ContainerStarted","Data":"a8b31288eab3f9e296e8fb62d14137071b641757da00606106e80cc5eb69e9eb"} Nov 24 21:34:05 crc kubenswrapper[4801]: I1124 21:34:05.665498 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:34:05 crc kubenswrapper[4801]: E1124 21:34:05.666101 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:34:06 crc 
kubenswrapper[4801]: I1124 21:34:06.183551 4801 generic.go:334] "Generic (PLEG): container finished" podID="806070c2-2599-47c1-9a86-bd078d5cc939" containerID="3678ee88209fbbfd9cd95ff05b02625750bf4d39c125496f244cc6801420098e" exitCode=0 Nov 24 21:34:06 crc kubenswrapper[4801]: I1124 21:34:06.183642 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" event={"ID":"806070c2-2599-47c1-9a86-bd078d5cc939","Type":"ContainerDied","Data":"3678ee88209fbbfd9cd95ff05b02625750bf4d39c125496f244cc6801420098e"} Nov 24 21:34:06 crc kubenswrapper[4801]: I1124 21:34:06.194482 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b","Type":"ContainerStarted","Data":"12d5c3fa9c6334167a37038074429cbe751392bf9d76a61eff99a31564c487b0"} Nov 24 21:34:07 crc kubenswrapper[4801]: I1124 21:34:07.854387 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.018689 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcg46\" (UniqueName: \"kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46\") pod \"806070c2-2599-47c1-9a86-bd078d5cc939\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.019581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory\") pod \"806070c2-2599-47c1-9a86-bd078d5cc939\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.020631 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key\") pod \"806070c2-2599-47c1-9a86-bd078d5cc939\" (UID: \"806070c2-2599-47c1-9a86-bd078d5cc939\") " Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.028555 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46" (OuterVolumeSpecName: "kube-api-access-rcg46") pod "806070c2-2599-47c1-9a86-bd078d5cc939" (UID: "806070c2-2599-47c1-9a86-bd078d5cc939"). InnerVolumeSpecName "kube-api-access-rcg46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.079049 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "806070c2-2599-47c1-9a86-bd078d5cc939" (UID: "806070c2-2599-47c1-9a86-bd078d5cc939"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.085540 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory" (OuterVolumeSpecName: "inventory") pod "806070c2-2599-47c1-9a86-bd078d5cc939" (UID: "806070c2-2599-47c1-9a86-bd078d5cc939"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.126314 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.126434 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcg46\" (UniqueName: \"kubernetes.io/projected/806070c2-2599-47c1-9a86-bd078d5cc939-kube-api-access-rcg46\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.126466 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/806070c2-2599-47c1-9a86-bd078d5cc939-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.241662 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" event={"ID":"806070c2-2599-47c1-9a86-bd078d5cc939","Type":"ContainerDied","Data":"2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4"} Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.241993 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef16507095848a73423ca4dae85cd41e319e9eac6c1d924c918037ffd770bf4" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.247529 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fqw4d" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.255435 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b","Type":"ContainerStarted","Data":"06b2d7eb3e2235ce19774412ffe9992a1e7dc0aaeb75dffa225e7f7fa2e79f39"} Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.316982 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.65846921 podStartE2EDuration="7.316956326s" podCreationTimestamp="2025-11-24 21:34:01 +0000 UTC" firstStartedPulling="2025-11-24 21:34:02.468984022 +0000 UTC m=+1614.551570692" lastFinishedPulling="2025-11-24 21:34:07.127471138 +0000 UTC m=+1619.210057808" observedRunningTime="2025-11-24 21:34:08.289068582 +0000 UTC m=+1620.371655262" watchObservedRunningTime="2025-11-24 21:34:08.316956326 +0000 UTC m=+1620.399542996" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.341870 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk"] Nov 24 21:34:08 crc kubenswrapper[4801]: E1124 21:34:08.342664 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806070c2-2599-47c1-9a86-bd078d5cc939" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.342684 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="806070c2-2599-47c1-9a86-bd078d5cc939" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.342984 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="806070c2-2599-47c1-9a86-bd078d5cc939" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.344465 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.353884 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.354720 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.354887 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.355071 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.389734 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk"] Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.437680 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.438330 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.438476 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhg6\" (UniqueName: \"kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.438529 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.541981 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.542171 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.542261 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhg6\" (UniqueName: \"kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.542309 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.546485 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.547093 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.551309 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.561694 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhg6\" (UniqueName: \"kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.684525 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:34:08 crc kubenswrapper[4801]: I1124 21:34:08.690704 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:34:09 crc kubenswrapper[4801]: I1124 21:34:09.339491 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk"] Nov 24 21:34:09 crc kubenswrapper[4801]: W1124 21:34:09.339857 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a784dc_c9be_4100_8002_063d8a5b5985.slice/crio-2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b WatchSource:0}: Error finding container 2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b: Status 404 returned error can't find the container with id 2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b Nov 24 21:34:09 crc kubenswrapper[4801]: I1124 21:34:09.752279 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.209744 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.290963 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" event={"ID":"c9a784dc-c9be-4100-8002-063d8a5b5985","Type":"ContainerStarted","Data":"3aabf5bbd576fc56db822858539ca08b8c32f02120e97daa66e6b71d09951ccf"} Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.291038 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" event={"ID":"c9a784dc-c9be-4100-8002-063d8a5b5985","Type":"ContainerStarted","Data":"2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b"} Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.294912 4801 generic.go:334] "Generic (PLEG): container finished" podID="fb8472fa-9a35-4787-b38c-0c657881d910" containerID="3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69" exitCode=0 Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.294956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerDied","Data":"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69"} Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.294983 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"fb8472fa-9a35-4787-b38c-0c657881d910","Type":"ContainerDied","Data":"150da886bc1eddc1921b1f00f6dce49d91efa12a34c22ffd3cb8409a921252e3"} Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.295001 4801 scope.go:117] "RemoveContainer" containerID="3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.295130 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303185 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303279 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303345 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303460 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303498 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303622 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303640 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303670 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303693 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303751 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnkpj\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.303848 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info\") pod \"fb8472fa-9a35-4787-b38c-0c657881d910\" (UID: \"fb8472fa-9a35-4787-b38c-0c657881d910\") " Nov 24 21:34:10 crc kubenswrapper[4801]: 
I1124 21:34:10.319482 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info" (OuterVolumeSpecName: "pod-info") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.334450 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.338187 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.338311 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.341496 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.351224 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" podStartSLOduration=1.952311657 podStartE2EDuration="2.351197505s" podCreationTimestamp="2025-11-24 21:34:08 +0000 UTC" firstStartedPulling="2025-11-24 21:34:09.350143489 +0000 UTC m=+1621.432730159" lastFinishedPulling="2025-11-24 21:34:09.749029337 +0000 UTC m=+1621.831616007" observedRunningTime="2025-11-24 21:34:10.317036207 +0000 UTC m=+1622.399622877" watchObservedRunningTime="2025-11-24 21:34:10.351197505 +0000 UTC m=+1622.433784175" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.355698 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj" (OuterVolumeSpecName: "kube-api-access-dnkpj") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "kube-api-access-dnkpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.358536 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.359598 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.383548 4801 scope.go:117] "RemoveContainer" containerID="d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410481 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb8472fa-9a35-4787-b38c-0c657881d910-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410542 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410574 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410590 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410601 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/fb8472fa-9a35-4787-b38c-0c657881d910-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410613 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410625 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.410637 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnkpj\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-kube-api-access-dnkpj\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.433812 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data" (OuterVolumeSpecName: "config-data") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.447516 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf" (OuterVolumeSpecName: "server-conf") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.476759 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.516595 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.516633 4801 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.516645 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8472fa-9a35-4787-b38c-0c657881d910-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.573487 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fb8472fa-9a35-4787-b38c-0c657881d910" (UID: "fb8472fa-9a35-4787-b38c-0c657881d910"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.620849 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb8472fa-9a35-4787-b38c-0c657881d910-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.677507 4801 scope.go:117] "RemoveContainer" containerID="3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69" Nov 24 21:34:10 crc kubenswrapper[4801]: E1124 21:34:10.678633 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69\": container with ID starting with 3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69 not found: ID does not exist" containerID="3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.678677 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69"} err="failed to get container status \"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69\": rpc error: code = NotFound desc = could not find container \"3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69\": container with ID starting with 3ef62b3de3b229ffb81a321985ab56625b9ac058a96892cecdf0f2938f8c4b69 not found: ID does not exist" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.678707 4801 scope.go:117] "RemoveContainer" containerID="d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18" Nov 24 21:34:10 crc kubenswrapper[4801]: E1124 21:34:10.678982 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18\": 
container with ID starting with d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18 not found: ID does not exist" containerID="d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.679004 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18"} err="failed to get container status \"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18\": rpc error: code = NotFound desc = could not find container \"d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18\": container with ID starting with d2a5bb93a298ad1c14900cd93a4db8939c14ab2140a4cbecb3a68e1e837fdd18 not found: ID does not exist" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.738540 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.738592 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.741796 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:34:10 crc kubenswrapper[4801]: E1124 21:34:10.742492 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="setup-container" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.742512 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="setup-container" Nov 24 21:34:10 crc kubenswrapper[4801]: E1124 21:34:10.742547 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.742554 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.742862 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" containerName="rabbitmq" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.745051 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.761715 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.933607 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934180 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934290 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934510 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934564 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxf2t\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-kube-api-access-hxf2t\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934689 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934748 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934868 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.934905 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.935217 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:10 crc kubenswrapper[4801]: I1124 21:34:10.935295 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.038231 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.038793 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.038799 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.038359 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041586 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041658 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041727 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxf2t\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-kube-api-access-hxf2t\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041807 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041838 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041879 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.041914 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.042063 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.042139 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.043014 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.043019 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.043638 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.044129 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.047703 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.048519 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc 
kubenswrapper[4801]: I1124 21:34:11.050050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.054168 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.062250 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxf2t\" (UniqueName: \"kubernetes.io/projected/6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb-kube-api-access-hxf2t\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.191149 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-1\" (UID: \"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb\") " pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.384512 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Nov 24 21:34:11 crc kubenswrapper[4801]: I1124 21:34:11.928346 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Nov 24 21:34:11 crc kubenswrapper[4801]: W1124 21:34:11.943050 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb98ae2_f7aa_4f46_86d0_9134dfd9c5cb.slice/crio-3891f39b1f6af01aa2b455b7d527d275610082b42a3609c03eed203b719e0f1f WatchSource:0}: Error finding container 3891f39b1f6af01aa2b455b7d527d275610082b42a3609c03eed203b719e0f1f: Status 404 returned error can't find the container with id 3891f39b1f6af01aa2b455b7d527d275610082b42a3609c03eed203b719e0f1f Nov 24 21:34:12 crc kubenswrapper[4801]: I1124 21:34:12.332900 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb","Type":"ContainerStarted","Data":"3891f39b1f6af01aa2b455b7d527d275610082b42a3609c03eed203b719e0f1f"} Nov 24 21:34:12 crc kubenswrapper[4801]: I1124 21:34:12.685237 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8472fa-9a35-4787-b38c-0c657881d910" path="/var/lib/kubelet/pods/fb8472fa-9a35-4787-b38c-0c657881d910/volumes" Nov 24 21:34:15 crc kubenswrapper[4801]: I1124 21:34:15.390020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb","Type":"ContainerStarted","Data":"74814fc204a407b424ec1e7253284562937f11b0c9ec07f6bae737112121876a"} Nov 24 21:34:17 crc kubenswrapper[4801]: I1124 21:34:17.664940 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:34:17 crc kubenswrapper[4801]: E1124 21:34:17.666269 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:34:32 crc kubenswrapper[4801]: I1124 21:34:32.693101 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:34:32 crc kubenswrapper[4801]: E1124 21:34:32.702386 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:34:44 crc kubenswrapper[4801]: I1124 21:34:44.076050 4801 scope.go:117] "RemoveContainer" containerID="1d8d94a461742ebbc55f22e1f10cf2f69c821a5e0e3fe9446d05a4f18d8d494c" Nov 24 21:34:44 crc kubenswrapper[4801]: I1124 21:34:44.167979 4801 scope.go:117] "RemoveContainer" containerID="51897b962c56e28aed948da93494e0ae0aad5c8b6f44ae4e813d7caa6293e935" Nov 24 21:34:44 crc kubenswrapper[4801]: I1124 21:34:44.225595 4801 scope.go:117] "RemoveContainer" containerID="b2f03bb73d4dfcb3ce5f4873cee08048747d7e9938e2a2f9062640084fb41d71" Nov 24 21:34:44 crc kubenswrapper[4801]: I1124 21:34:44.267893 4801 scope.go:117] "RemoveContainer" containerID="1e34f51e8f2b9e9b7d4073db7fee2c72004217cb791ecf489730c3da31b225a2" Nov 24 21:34:44 crc kubenswrapper[4801]: I1124 21:34:44.664492 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:34:44 crc kubenswrapper[4801]: E1124 21:34:44.665034 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:34:47 crc kubenswrapper[4801]: I1124 21:34:47.988267 4801 generic.go:334] "Generic (PLEG): container finished" podID="6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb" containerID="74814fc204a407b424ec1e7253284562937f11b0c9ec07f6bae737112121876a" exitCode=0 Nov 24 21:34:47 crc kubenswrapper[4801]: I1124 21:34:47.988361 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb","Type":"ContainerDied","Data":"74814fc204a407b424ec1e7253284562937f11b0c9ec07f6bae737112121876a"} Nov 24 21:34:49 crc kubenswrapper[4801]: I1124 21:34:49.005737 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb","Type":"ContainerStarted","Data":"3e052986d2a0b8f476f79f8eba24c2e0ce2e84e9b23daa96643d0aa47ade27af"} Nov 24 21:34:49 crc kubenswrapper[4801]: I1124 21:34:49.011477 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Nov 24 21:34:56 crc kubenswrapper[4801]: I1124 21:34:56.664865 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:34:56 crc kubenswrapper[4801]: E1124 21:34:56.665814 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:35:01 crc kubenswrapper[4801]: I1124 21:35:01.389662 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Nov 24 21:35:01 crc kubenswrapper[4801]: I1124 21:35:01.419440 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=51.419416415 podStartE2EDuration="51.419416415s" podCreationTimestamp="2025-11-24 21:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:34:49.040258564 +0000 UTC m=+1661.122845244" watchObservedRunningTime="2025-11-24 21:35:01.419416415 +0000 UTC m=+1673.502003085" Nov 24 21:35:01 crc kubenswrapper[4801]: I1124 21:35:01.461664 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:06 crc kubenswrapper[4801]: I1124 21:35:06.982928 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="rabbitmq" containerID="cri-o://5dd98a3beb0b5724528590525fc724cb23a883a2bb2572cc1b74d820f698143d" gracePeriod=604795 Nov 24 21:35:07 crc kubenswrapper[4801]: I1124 21:35:07.665403 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:35:07 crc kubenswrapper[4801]: E1124 21:35:07.666316 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:35:13 crc kubenswrapper[4801]: I1124 21:35:13.460542 4801 generic.go:334] "Generic (PLEG): container finished" podID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerID="5dd98a3beb0b5724528590525fc724cb23a883a2bb2572cc1b74d820f698143d" exitCode=0 Nov 24 21:35:13 crc kubenswrapper[4801]: I1124 21:35:13.460642 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerDied","Data":"5dd98a3beb0b5724528590525fc724cb23a883a2bb2572cc1b74d820f698143d"} Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.020694 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190127 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190254 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190313 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190488 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190527 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190587 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190692 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8j5\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190750 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190832 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 
21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190922 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.190992 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"2f56f017-0f5c-4eb2-b3be-44db75365483\" (UID: \"2f56f017-0f5c-4eb2-b3be-44db75365483\") " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.191643 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.191967 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.194735 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.194766 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.197214 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.200833 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.204510 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.220458 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info" (OuterVolumeSpecName: "pod-info") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.226025 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5" (OuterVolumeSpecName: "kube-api-access-dn8j5") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "kube-api-access-dn8j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.227580 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.278969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data" (OuterVolumeSpecName: "config-data") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298601 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298634 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298648 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298658 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56f017-0f5c-4eb2-b3be-44db75365483-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298670 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56f017-0f5c-4eb2-b3be-44db75365483-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.298680 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn8j5\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-kube-api-access-dn8j5\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.300208 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.316502 4801 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf" (OuterVolumeSpecName: "server-conf") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.339941 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.403591 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.403638 4801 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56f017-0f5c-4eb2-b3be-44db75365483-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.424478 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2f56f017-0f5c-4eb2-b3be-44db75365483" (UID: "2f56f017-0f5c-4eb2-b3be-44db75365483"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.478960 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56f017-0f5c-4eb2-b3be-44db75365483","Type":"ContainerDied","Data":"443604bfe22f48240d3d77b03fcb5449fb5bc0851124b26d31f57e9caddf6c4a"} Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.479035 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.479045 4801 scope.go:117] "RemoveContainer" containerID="5dd98a3beb0b5724528590525fc724cb23a883a2bb2572cc1b74d820f698143d" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.507294 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56f017-0f5c-4eb2-b3be-44db75365483-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.567329 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.587346 4801 scope.go:117] "RemoveContainer" containerID="a857a39ae5ed62c45ea72764d9ae7bed043f8de581aa188be2bce5c73eed473f" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.593411 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.626831 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:14 crc kubenswrapper[4801]: E1124 21:35:14.627617 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="rabbitmq" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.627640 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="rabbitmq" Nov 24 21:35:14 crc kubenswrapper[4801]: E1124 21:35:14.627688 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="setup-container" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.627696 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="setup-container" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.628000 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" containerName="rabbitmq" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.629771 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.644894 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.689573 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f56f017-0f5c-4eb2-b3be-44db75365483" path="/var/lib/kubelet/pods/2f56f017-0f5c-4eb2-b3be-44db75365483/volumes" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825397 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-config-data\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825509 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e50000d1-be7d-4dea-86dc-d616dda527b7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825577 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825617 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825638 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.825952 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e50000d1-be7d-4dea-86dc-d616dda527b7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.826015 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.826040 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.826190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.826213 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vz5n\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-kube-api-access-2vz5n\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.826249 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.929744 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.929824 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.929948 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e50000d1-be7d-4dea-86dc-d616dda527b7-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.929988 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930020 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930102 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930136 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vz5n\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-kube-api-access-2vz5n\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930175 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930387 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-config-data\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e50000d1-be7d-4dea-86dc-d616dda527b7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930531 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.930810 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.932375 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-config-data\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.932497 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.932823 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.932887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e50000d1-be7d-4dea-86dc-d616dda527b7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.933401 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.937723 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.938104 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 
crc kubenswrapper[4801]: I1124 21:35:14.939139 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e50000d1-be7d-4dea-86dc-d616dda527b7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.950146 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e50000d1-be7d-4dea-86dc-d616dda527b7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.959958 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vz5n\" (UniqueName: \"kubernetes.io/projected/e50000d1-be7d-4dea-86dc-d616dda527b7-kube-api-access-2vz5n\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:14 crc kubenswrapper[4801]: I1124 21:35:14.979665 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e50000d1-be7d-4dea-86dc-d616dda527b7\") " pod="openstack/rabbitmq-server-0" Nov 24 21:35:15 crc kubenswrapper[4801]: I1124 21:35:15.016806 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:35:15 crc kubenswrapper[4801]: I1124 21:35:15.615079 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:35:16 crc kubenswrapper[4801]: I1124 21:35:16.509439 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e50000d1-be7d-4dea-86dc-d616dda527b7","Type":"ContainerStarted","Data":"aa1d1d8da35e8710c607d89441e0157d5e167020edabacbe6fa5324b639db57d"} Nov 24 21:35:18 crc kubenswrapper[4801]: I1124 21:35:18.542530 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e50000d1-be7d-4dea-86dc-d616dda527b7","Type":"ContainerStarted","Data":"76e3b46377dbc925b7ad09f8c6b01bc8a7bcea481111d497e738ee5a6b7b6ae7"} Nov 24 21:35:20 crc kubenswrapper[4801]: I1124 21:35:20.674740 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:35:20 crc kubenswrapper[4801]: E1124 21:35:20.675824 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:35:33 crc kubenswrapper[4801]: I1124 21:35:33.664706 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:35:33 crc kubenswrapper[4801]: E1124 21:35:33.665846 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:35:44 crc kubenswrapper[4801]: I1124 21:35:44.519852 4801 scope.go:117] "RemoveContainer" containerID="4653d15376ee33d5198432b9f042b1e764ab96a4dba0a739cd8437d1e1cb2225" Nov 24 21:35:44 crc kubenswrapper[4801]: I1124 21:35:44.558347 4801 scope.go:117] "RemoveContainer" containerID="c6266119f517e8d620284b8218d842f1eab44bc9debb2c6bb61b91e88afc9fb2" Nov 24 21:35:47 crc kubenswrapper[4801]: I1124 21:35:47.664707 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:35:47 crc kubenswrapper[4801]: E1124 21:35:47.665747 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:35:51 crc kubenswrapper[4801]: I1124 21:35:51.030491 4801 generic.go:334] "Generic (PLEG): container finished" podID="e50000d1-be7d-4dea-86dc-d616dda527b7" containerID="76e3b46377dbc925b7ad09f8c6b01bc8a7bcea481111d497e738ee5a6b7b6ae7" exitCode=0 Nov 24 21:35:51 crc kubenswrapper[4801]: I1124 21:35:51.030657 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e50000d1-be7d-4dea-86dc-d616dda527b7","Type":"ContainerDied","Data":"76e3b46377dbc925b7ad09f8c6b01bc8a7bcea481111d497e738ee5a6b7b6ae7"} Nov 24 21:35:52 crc kubenswrapper[4801]: I1124 21:35:52.046221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e50000d1-be7d-4dea-86dc-d616dda527b7","Type":"ContainerStarted","Data":"f7c91317050f45687f1ed0799bd01e904910afed5f5cb0cbafd754a0c7a5ef53"} Nov 24 21:35:52 crc kubenswrapper[4801]: I1124 21:35:52.047047 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 21:35:52 crc kubenswrapper[4801]: I1124 21:35:52.086349 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.086325131 podStartE2EDuration="38.086325131s" podCreationTimestamp="2025-11-24 21:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:35:52.076506866 +0000 UTC m=+1724.159093536" watchObservedRunningTime="2025-11-24 21:35:52.086325131 +0000 UTC m=+1724.168911801" Nov 24 21:35:58 crc kubenswrapper[4801]: I1124 21:35:58.686083 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:35:58 crc kubenswrapper[4801]: E1124 21:35:58.687144 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:36:05 crc kubenswrapper[4801]: I1124 21:36:05.020577 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 21:36:13 crc kubenswrapper[4801]: I1124 21:36:13.665358 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:36:13 crc kubenswrapper[4801]: E1124 21:36:13.666952 4801 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:36:27 crc kubenswrapper[4801]: I1124 21:36:27.664302 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:36:27 crc kubenswrapper[4801]: E1124 21:36:27.665798 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:36:42 crc kubenswrapper[4801]: I1124 21:36:42.666333 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:36:42 crc kubenswrapper[4801]: E1124 21:36:42.667817 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:36:54 crc kubenswrapper[4801]: I1124 21:36:54.667098 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:36:54 crc kubenswrapper[4801]: E1124 21:36:54.668209 4801 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:37:03 crc kubenswrapper[4801]: I1124 21:37:03.080590 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e647-account-create-jzbwv"] Nov 24 21:37:03 crc kubenswrapper[4801]: I1124 21:37:03.095885 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4z6pq"] Nov 24 21:37:03 crc kubenswrapper[4801]: I1124 21:37:03.108055 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e647-account-create-jzbwv"] Nov 24 21:37:03 crc kubenswrapper[4801]: I1124 21:37:03.121610 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4z6pq"] Nov 24 21:37:04 crc kubenswrapper[4801]: I1124 21:37:04.682760 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4" path="/var/lib/kubelet/pods/49dd71c9-f4f7-42a5-97f4-1ba4c8b4ccd4/volumes" Nov 24 21:37:04 crc kubenswrapper[4801]: I1124 21:37:04.687591 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac13fbea-bf18-449a-aa48-65aefa77699d" path="/var/lib/kubelet/pods/ac13fbea-bf18-449a-aa48-65aefa77699d/volumes" Nov 24 21:37:06 crc kubenswrapper[4801]: I1124 21:37:06.057627 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2qsxb"] Nov 24 21:37:06 crc kubenswrapper[4801]: I1124 21:37:06.069876 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2qsxb"] Nov 24 21:37:06 crc kubenswrapper[4801]: I1124 21:37:06.682812 4801 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0bed4fea-aa42-417a-a051-c42b55b21835" path="/var/lib/kubelet/pods/0bed4fea-aa42-417a-a051-c42b55b21835/volumes" Nov 24 21:37:09 crc kubenswrapper[4801]: I1124 21:37:09.664928 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:37:09 crc kubenswrapper[4801]: E1124 21:37:09.666614 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.059801 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3ad7-account-create-jw6sj"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.080280 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pxnc7"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.096811 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7782-account-create-5lrd6"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.109915 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3ad7-account-create-jw6sj"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.121640 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7782-account-create-5lrd6"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.135243 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pxnc7"] Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.703159 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8187888b-0911-4cd9-be77-4c373789c09a" path="/var/lib/kubelet/pods/8187888b-0911-4cd9-be77-4c373789c09a/volumes" Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.707132 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09d1697-e51b-43fa-95d5-6194022a7206" path="/var/lib/kubelet/pods/a09d1697-e51b-43fa-95d5-6194022a7206/volumes" Nov 24 21:37:10 crc kubenswrapper[4801]: I1124 21:37:10.709895 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a807d6ed-6672-4b56-b392-f20eadaaf913" path="/var/lib/kubelet/pods/a807d6ed-6672-4b56-b392-f20eadaaf913/volumes" Nov 24 21:37:13 crc kubenswrapper[4801]: I1124 21:37:13.051925 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4b53-account-create-ggbmv"] Nov 24 21:37:13 crc kubenswrapper[4801]: I1124 21:37:13.073449 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cflfb"] Nov 24 21:37:13 crc kubenswrapper[4801]: I1124 21:37:13.084501 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cflfb"] Nov 24 21:37:13 crc kubenswrapper[4801]: I1124 21:37:13.098128 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4b53-account-create-ggbmv"] Nov 24 21:37:14 crc kubenswrapper[4801]: I1124 21:37:14.693129 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44d60b6-4a0a-47dd-bc64-350bd34f5a4d" path="/var/lib/kubelet/pods/c44d60b6-4a0a-47dd-bc64-350bd34f5a4d/volumes" Nov 24 21:37:14 crc kubenswrapper[4801]: I1124 21:37:14.696919 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7df3222-fe4f-4726-beac-4ba9e699368b" path="/var/lib/kubelet/pods/d7df3222-fe4f-4726-beac-4ba9e699368b/volumes" Nov 24 21:37:21 crc kubenswrapper[4801]: I1124 21:37:21.664812 4801 scope.go:117] "RemoveContainer" 
containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:37:21 crc kubenswrapper[4801]: E1124 21:37:21.666255 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:37:24 crc kubenswrapper[4801]: I1124 21:37:24.553141 4801 generic.go:334] "Generic (PLEG): container finished" podID="c9a784dc-c9be-4100-8002-063d8a5b5985" containerID="3aabf5bbd576fc56db822858539ca08b8c32f02120e97daa66e6b71d09951ccf" exitCode=0 Nov 24 21:37:24 crc kubenswrapper[4801]: I1124 21:37:24.553269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" event={"ID":"c9a784dc-c9be-4100-8002-063d8a5b5985","Type":"ContainerDied","Data":"3aabf5bbd576fc56db822858539ca08b8c32f02120e97daa66e6b71d09951ccf"} Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.137805 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.200419 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key\") pod \"c9a784dc-c9be-4100-8002-063d8a5b5985\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.200561 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle\") pod \"c9a784dc-c9be-4100-8002-063d8a5b5985\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.200852 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory\") pod \"c9a784dc-c9be-4100-8002-063d8a5b5985\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.200971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlhg6\" (UniqueName: \"kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6\") pod \"c9a784dc-c9be-4100-8002-063d8a5b5985\" (UID: \"c9a784dc-c9be-4100-8002-063d8a5b5985\") " Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.209273 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6" (OuterVolumeSpecName: "kube-api-access-rlhg6") pod "c9a784dc-c9be-4100-8002-063d8a5b5985" (UID: "c9a784dc-c9be-4100-8002-063d8a5b5985"). InnerVolumeSpecName "kube-api-access-rlhg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.210054 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c9a784dc-c9be-4100-8002-063d8a5b5985" (UID: "c9a784dc-c9be-4100-8002-063d8a5b5985"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.245776 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory" (OuterVolumeSpecName: "inventory") pod "c9a784dc-c9be-4100-8002-063d8a5b5985" (UID: "c9a784dc-c9be-4100-8002-063d8a5b5985"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.246696 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c9a784dc-c9be-4100-8002-063d8a5b5985" (UID: "c9a784dc-c9be-4100-8002-063d8a5b5985"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.304668 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.305289 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlhg6\" (UniqueName: \"kubernetes.io/projected/c9a784dc-c9be-4100-8002-063d8a5b5985-kube-api-access-rlhg6\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.305305 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.305318 4801 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a784dc-c9be-4100-8002-063d8a5b5985-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.602932 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" event={"ID":"c9a784dc-c9be-4100-8002-063d8a5b5985","Type":"ContainerDied","Data":"2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b"} Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.603011 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f09720b2b8d2d5006957dda560e84cdc9f82e74771bd8dfcb5271901dcf540b" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.603096 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.706406 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m"] Nov 24 21:37:26 crc kubenswrapper[4801]: E1124 21:37:26.707216 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a784dc-c9be-4100-8002-063d8a5b5985" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.707242 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a784dc-c9be-4100-8002-063d8a5b5985" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.707665 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a784dc-c9be-4100-8002-063d8a5b5985" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.709066 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.711709 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.712125 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.712423 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.715006 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.731433 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m"] Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.819585 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.819645 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.819671 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzbr\" (UniqueName: \"kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.922237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.922286 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.922314 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzbr\" (UniqueName: \"kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.931065 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.939173 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:26 crc kubenswrapper[4801]: I1124 21:37:26.947068 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzbr\" (UniqueName: \"kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tv56m\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:27 crc kubenswrapper[4801]: I1124 21:37:27.052025 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:37:27 crc kubenswrapper[4801]: I1124 21:37:27.768995 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m"] Nov 24 21:37:27 crc kubenswrapper[4801]: I1124 21:37:27.770512 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:37:28 crc kubenswrapper[4801]: I1124 21:37:28.638925 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" event={"ID":"bbed0308-b82c-4512-8391-a46502977b63","Type":"ContainerStarted","Data":"3aab44c8ea735d9762b708361f0123b42243ebead8d26ddfdc6a4a8c4e52c6e4"} Nov 24 21:37:29 crc kubenswrapper[4801]: I1124 21:37:29.657437 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" event={"ID":"bbed0308-b82c-4512-8391-a46502977b63","Type":"ContainerStarted","Data":"93feab4ed83a361bd569ebd3f2652ef02a7d31f51a5ec05373a129b3a8a75827"} Nov 24 21:37:31 crc kubenswrapper[4801]: I1124 21:37:31.050233 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" podStartSLOduration=4.50653408 podStartE2EDuration="5.050204726s" podCreationTimestamp="2025-11-24 21:37:26 +0000 UTC" firstStartedPulling="2025-11-24 21:37:27.770260516 +0000 UTC m=+1819.852847186" lastFinishedPulling="2025-11-24 21:37:28.313931152 +0000 UTC m=+1820.396517832" observedRunningTime="2025-11-24 21:37:29.68759902 +0000 UTC m=+1821.770185690" watchObservedRunningTime="2025-11-24 21:37:31.050204726 +0000 UTC m=+1823.132791396" Nov 24 21:37:31 crc kubenswrapper[4801]: I1124 21:37:31.058809 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0fd0-account-create-jpd6q"] Nov 24 21:37:31 crc 
kubenswrapper[4801]: I1124 21:37:31.087314 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5"] Nov 24 21:37:31 crc kubenswrapper[4801]: I1124 21:37:31.104732 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xsdv5"] Nov 24 21:37:31 crc kubenswrapper[4801]: I1124 21:37:31.121452 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0fd0-account-create-jpd6q"] Nov 24 21:37:32 crc kubenswrapper[4801]: I1124 21:37:32.691778 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339630dc-e6c1-4471-bb45-a0985b4097ba" path="/var/lib/kubelet/pods/339630dc-e6c1-4471-bb45-a0985b4097ba/volumes" Nov 24 21:37:32 crc kubenswrapper[4801]: I1124 21:37:32.694320 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535abb13-1113-478c-af82-661e1a06f21e" path="/var/lib/kubelet/pods/535abb13-1113-478c-af82-661e1a06f21e/volumes" Nov 24 21:37:35 crc kubenswrapper[4801]: I1124 21:37:35.664566 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:37:36 crc kubenswrapper[4801]: I1124 21:37:36.780418 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6"} Nov 24 21:37:44 crc kubenswrapper[4801]: I1124 21:37:44.745320 4801 scope.go:117] "RemoveContainer" containerID="abb9ba962e41dcb3ebd31ee64b19302aa98a627cc182138fee6759bab49db55a" Nov 24 21:37:44 crc kubenswrapper[4801]: I1124 21:37:44.811440 4801 scope.go:117] "RemoveContainer" containerID="014caa51b3ffe3a7daf063907eb8bc9137ccb49012ecf98543cb3c5bca92b96c" Nov 24 21:37:44 crc kubenswrapper[4801]: I1124 21:37:44.866742 4801 scope.go:117] "RemoveContainer" 
containerID="3f97b4efe5a8f1855d96953c7fe2e5641a7bdc0db247a241073e48797642bdd5" Nov 24 21:37:44 crc kubenswrapper[4801]: I1124 21:37:44.937768 4801 scope.go:117] "RemoveContainer" containerID="21810e02d126300bcd4cbf37bb31dcd054ee9ad4165ba74a5f59abdcaed6f782" Nov 24 21:37:44 crc kubenswrapper[4801]: I1124 21:37:44.989114 4801 scope.go:117] "RemoveContainer" containerID="d6124ae5b783ea07f780ddf37adc7bdf3fcc82ec43aa13d7b459e92276827f8f" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.062144 4801 scope.go:117] "RemoveContainer" containerID="f215d627d7e6db0cc079f215d69d87bbb55ff7021310e31bd20016ab8ba7afa3" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.119762 4801 scope.go:117] "RemoveContainer" containerID="44f09d6a53a36e24fcb7eeacafa0ccbed2f5bddd32222f5bdb242ca91f22cf99" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.164508 4801 scope.go:117] "RemoveContainer" containerID="df99caeb91e68d689662be32507075ef2e159e79137b32be319b65e8c37034a6" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.200111 4801 scope.go:117] "RemoveContainer" containerID="511bb2dc02b4db9b12da879be614351ab65fa89176cee1f74148c9d7f3a4601e" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.230966 4801 scope.go:117] "RemoveContainer" containerID="a2a78734a3e7c4dc8d8b5f6acff32b2f985aace6fc6f2f9874b5111e74c4e4b8" Nov 24 21:37:45 crc kubenswrapper[4801]: I1124 21:37:45.264233 4801 scope.go:117] "RemoveContainer" containerID="039125d23328e6804789d952815adf77ac7263c9c48905f3e16350d1b320c44a" Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.069508 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e180-account-create-xzfhs"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.090530 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e180-account-create-xzfhs"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.103258 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-create-cbqw2"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.116389 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8k6hl"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.127778 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-b44fx"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.158648 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-b44fx"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.172610 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8k6hl"] Nov 24 21:37:53 crc kubenswrapper[4801]: I1124 21:37:53.188259 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cbqw2"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.039878 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e08c-account-create-vct62"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.056604 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-08fa-account-create-lpjbf"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.073729 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pkrzc"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.084790 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e08c-account-create-vct62"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.097706 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pkrzc"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.106981 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-08fa-account-create-lpjbf"] Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.681702 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ffebca-fff7-4df2-8db3-b302a7fcd3e1" 
path="/var/lib/kubelet/pods/00ffebca-fff7-4df2-8db3-b302a7fcd3e1/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.685441 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06671e3a-592e-476d-bc4b-74646cfc034b" path="/var/lib/kubelet/pods/06671e3a-592e-476d-bc4b-74646cfc034b/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.688876 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31" path="/var/lib/kubelet/pods/2e7c59f9-8abf-4ef8-9829-c5fa2f07ee31/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.692111 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d62c145-e144-4f0a-8dcb-73d67b536463" path="/var/lib/kubelet/pods/6d62c145-e144-4f0a-8dcb-73d67b536463/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.695402 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88357b0-8aad-4e9a-981f-d520525938a1" path="/var/lib/kubelet/pods/d88357b0-8aad-4e9a-981f-d520525938a1/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.698962 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e758b4dd-3072-46bb-9e96-c64dc748c8e4" path="/var/lib/kubelet/pods/e758b4dd-3072-46bb-9e96-c64dc748c8e4/volumes" Nov 24 21:37:54 crc kubenswrapper[4801]: I1124 21:37:54.700988 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0" path="/var/lib/kubelet/pods/eac571e4-fd83-4c7e-a7ef-c3ed7fdf0cc0/volumes" Nov 24 21:37:57 crc kubenswrapper[4801]: I1124 21:37:57.035689 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0be8-account-create-r6lvl"] Nov 24 21:37:57 crc kubenswrapper[4801]: I1124 21:37:57.053990 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0be8-account-create-r6lvl"] Nov 24 21:37:58 crc kubenswrapper[4801]: I1124 21:37:58.043455 4801 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-m4tng"] Nov 24 21:37:58 crc kubenswrapper[4801]: I1124 21:37:58.056178 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-m4tng"] Nov 24 21:37:58 crc kubenswrapper[4801]: I1124 21:37:58.701768 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87237f3a-08f9-46a9-930e-a3ca7bf1cd9f" path="/var/lib/kubelet/pods/87237f3a-08f9-46a9-930e-a3ca7bf1cd9f/volumes" Nov 24 21:37:58 crc kubenswrapper[4801]: I1124 21:37:58.704251 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d70dfa-a9e5-417b-9506-95bd490da3ef" path="/var/lib/kubelet/pods/b7d70dfa-a9e5-417b-9506-95bd490da3ef/volumes" Nov 24 21:38:02 crc kubenswrapper[4801]: I1124 21:38:02.083265 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mqjvg"] Nov 24 21:38:02 crc kubenswrapper[4801]: I1124 21:38:02.099692 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mqjvg"] Nov 24 21:38:02 crc kubenswrapper[4801]: I1124 21:38:02.693918 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa8d5ae-1109-4365-8360-ad5ac4bd8198" path="/var/lib/kubelet/pods/faa8d5ae-1109-4365-8360-ad5ac4bd8198/volumes" Nov 24 21:38:35 crc kubenswrapper[4801]: I1124 21:38:35.066215 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pzzvv"] Nov 24 21:38:35 crc kubenswrapper[4801]: I1124 21:38:35.081428 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pzzvv"] Nov 24 21:38:36 crc kubenswrapper[4801]: I1124 21:38:36.690814 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a48374-ecd3-49fb-bddb-d9afa60a4ac6" path="/var/lib/kubelet/pods/94a48374-ecd3-49fb-bddb-d9afa60a4ac6/volumes" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.054413 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-m42cd"] Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.067274 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q8zwc"] Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.079948 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m42cd"] Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.089933 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q8zwc"] Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.539842 4801 scope.go:117] "RemoveContainer" containerID="a1ebcbb169350fb7019eb7660e9579fa8b4693f51417052daeb4ca7b23b233f6" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.591529 4801 scope.go:117] "RemoveContainer" containerID="c8d9195f5299383076b86de0a020ebe3e859b2ad5c743facf6d07f0ce65f0fea" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.655065 4801 scope.go:117] "RemoveContainer" containerID="34c83b35759f47d9c765a8eb6fb62071b13db6326b9aed2dede1a6b3849132ab" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.727104 4801 scope.go:117] "RemoveContainer" containerID="7d16d1d1dd95e06912c9264b33d7cad827fc2c1c8e48894f08c456eb4058e414" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.818871 4801 scope.go:117] "RemoveContainer" containerID="651d976016bdef41ced6683d410fd9347fd673ea02407ed76d22dd888b325f29" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.881146 4801 scope.go:117] "RemoveContainer" containerID="5219f2e69a8e0430149fe5c684e1cd7ec19f550414a83e1a4c49d22d3ee5ca12" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.959982 4801 scope.go:117] "RemoveContainer" containerID="44acd4245672b1699582211b97f3448966638a1333ee775e900900cf01a22a2a" Nov 24 21:38:45 crc kubenswrapper[4801]: I1124 21:38:45.990390 4801 scope.go:117] "RemoveContainer" containerID="49d3d4f6b082362338c9f4fdb755861e2f12c23bad176b88c48490518d77cb63" Nov 24 21:38:46 crc kubenswrapper[4801]: 
I1124 21:38:46.027787 4801 scope.go:117] "RemoveContainer" containerID="b4405df26d211ac6160aed2435615cea115a4c9de09a802e40831646ed7aca74" Nov 24 21:38:46 crc kubenswrapper[4801]: I1124 21:38:46.062889 4801 scope.go:117] "RemoveContainer" containerID="e8782d32b55f7efdbbdf83274b894c9abea362b690b0f5c30d4d1a52928c6104" Nov 24 21:38:46 crc kubenswrapper[4801]: I1124 21:38:46.098942 4801 scope.go:117] "RemoveContainer" containerID="750798ae3a0ed6ce1fc9e0c320acc3bb9ff19af3784c657fc6a9c3b592607bae" Nov 24 21:38:46 crc kubenswrapper[4801]: I1124 21:38:46.686651 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5" path="/var/lib/kubelet/pods/6ec51489-4b2b-4708-85ca-e1c6d9f2fcc5/volumes" Nov 24 21:38:46 crc kubenswrapper[4801]: I1124 21:38:46.689441 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d93222-44c6-4113-92f2-8a320aefbd82" path="/var/lib/kubelet/pods/c5d93222-44c6-4113-92f2-8a320aefbd82/volumes" Nov 24 21:38:56 crc kubenswrapper[4801]: I1124 21:38:56.043388 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kmchz"] Nov 24 21:38:56 crc kubenswrapper[4801]: I1124 21:38:56.064908 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kmchz"] Nov 24 21:38:56 crc kubenswrapper[4801]: I1124 21:38:56.683796 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953c737e-024f-41ba-9544-d1238b75519c" path="/var/lib/kubelet/pods/953c737e-024f-41ba-9544-d1238b75519c/volumes" Nov 24 21:38:58 crc kubenswrapper[4801]: I1124 21:38:58.090596 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ccwmw"] Nov 24 21:38:58 crc kubenswrapper[4801]: I1124 21:38:58.106520 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ccwmw"] Nov 24 21:38:58 crc kubenswrapper[4801]: I1124 21:38:58.680037 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3f3b859c-0916-4b01-a41f-0b9fd4d8b204" path="/var/lib/kubelet/pods/3f3b859c-0916-4b01-a41f-0b9fd4d8b204/volumes" Nov 24 21:39:29 crc kubenswrapper[4801]: I1124 21:39:29.540823 4801 generic.go:334] "Generic (PLEG): container finished" podID="bbed0308-b82c-4512-8391-a46502977b63" containerID="93feab4ed83a361bd569ebd3f2652ef02a7d31f51a5ec05373a129b3a8a75827" exitCode=0 Nov 24 21:39:29 crc kubenswrapper[4801]: I1124 21:39:29.540933 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" event={"ID":"bbed0308-b82c-4512-8391-a46502977b63","Type":"ContainerDied","Data":"93feab4ed83a361bd569ebd3f2652ef02a7d31f51a5ec05373a129b3a8a75827"} Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.082293 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.172954 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory\") pod \"bbed0308-b82c-4512-8391-a46502977b63\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.173052 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key\") pod \"bbed0308-b82c-4512-8391-a46502977b63\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.173089 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzbr\" (UniqueName: \"kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr\") pod \"bbed0308-b82c-4512-8391-a46502977b63\" (UID: \"bbed0308-b82c-4512-8391-a46502977b63\") " Nov 24 
21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.181736 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr" (OuterVolumeSpecName: "kube-api-access-stzbr") pod "bbed0308-b82c-4512-8391-a46502977b63" (UID: "bbed0308-b82c-4512-8391-a46502977b63"). InnerVolumeSpecName "kube-api-access-stzbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.214583 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory" (OuterVolumeSpecName: "inventory") pod "bbed0308-b82c-4512-8391-a46502977b63" (UID: "bbed0308-b82c-4512-8391-a46502977b63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.223321 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bbed0308-b82c-4512-8391-a46502977b63" (UID: "bbed0308-b82c-4512-8391-a46502977b63"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.276553 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.276585 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbed0308-b82c-4512-8391-a46502977b63-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.276595 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzbr\" (UniqueName: \"kubernetes.io/projected/bbed0308-b82c-4512-8391-a46502977b63-kube-api-access-stzbr\") on node \"crc\" DevicePath \"\"" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.586509 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" event={"ID":"bbed0308-b82c-4512-8391-a46502977b63","Type":"ContainerDied","Data":"3aab44c8ea735d9762b708361f0123b42243ebead8d26ddfdc6a4a8c4e52c6e4"} Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.586566 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aab44c8ea735d9762b708361f0123b42243ebead8d26ddfdc6a4a8c4e52c6e4" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.586648 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tv56m" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.789763 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb"] Nov 24 21:39:31 crc kubenswrapper[4801]: E1124 21:39:31.791010 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbed0308-b82c-4512-8391-a46502977b63" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.791034 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbed0308-b82c-4512-8391-a46502977b63" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.791344 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbed0308-b82c-4512-8391-a46502977b63" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.794163 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.804064 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.804064 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.804732 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.804955 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.828334 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb"] Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.894704 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.895000 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.895039 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8mc\" (UniqueName: \"kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:31 crc kubenswrapper[4801]: I1124 21:39:31.999774 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:31.999875 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn8mc\" (UniqueName: \"kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.000074 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.006297 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.008067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.022020 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn8mc\" (UniqueName: \"kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.122931 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:39:32 crc kubenswrapper[4801]: I1124 21:39:32.867843 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb"] Nov 24 21:39:33 crc kubenswrapper[4801]: I1124 21:39:33.617752 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" event={"ID":"821b772b-2d36-4166-bfff-48e9f623f3bf","Type":"ContainerStarted","Data":"1956962499dcb720cd25be0fe9b5cc59978ba6a07c7ff6539237243048134e1f"} Nov 24 21:39:34 crc kubenswrapper[4801]: I1124 21:39:34.631562 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" event={"ID":"821b772b-2d36-4166-bfff-48e9f623f3bf","Type":"ContainerStarted","Data":"669d4a819436206d5a4ab0c46b31c5324dd73f64ebaba39b7fbda27dd96c04d7"} Nov 24 21:39:34 crc kubenswrapper[4801]: I1124 21:39:34.657468 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" podStartSLOduration=3.103743003 podStartE2EDuration="3.657432065s" podCreationTimestamp="2025-11-24 21:39:31 +0000 UTC" firstStartedPulling="2025-11-24 21:39:32.87516734 +0000 UTC m=+1944.957754030" lastFinishedPulling="2025-11-24 21:39:33.428856412 +0000 UTC m=+1945.511443092" observedRunningTime="2025-11-24 21:39:34.650479338 +0000 UTC m=+1946.733066008" watchObservedRunningTime="2025-11-24 21:39:34.657432065 +0000 UTC m=+1946.740018775" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.080310 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qfhn4"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.099553 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-474vq"] Nov 24 21:39:46 crc 
kubenswrapper[4801]: I1124 21:39:46.114914 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cabb-account-create-8scsl"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.125617 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-474vq"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.137262 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-t4fbs"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.149860 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qfhn4"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.161725 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cabb-account-create-8scsl"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.172815 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-t4fbs"] Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.431813 4801 scope.go:117] "RemoveContainer" containerID="1e8b23c4d2a53f58360dc0d64cdb399e8cebe35e4cd2e2288bcc5e3042130974" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.491324 4801 scope.go:117] "RemoveContainer" containerID="60cbb024dccdf49d1b0f29f479ba4d1340fcb5e25d1525166462eb04da510d06" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.550809 4801 scope.go:117] "RemoveContainer" containerID="78c100024e34c28f88c08b3b79101e04813f52ddc0319bc3fedfd1e07eb73a30" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.638091 4801 scope.go:117] "RemoveContainer" containerID="60128802e180a48f9e9316805a6bd0957198ae71639de8e371a835d8cadcb352" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.684582 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd" path="/var/lib/kubelet/pods/1a2d27a8-1962-4a8f-8c2c-b65fa371f0dd/volumes" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.686088 
4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e075bbc-6e1b-4d86-a6c2-ae3de8941695" path="/var/lib/kubelet/pods/2e075bbc-6e1b-4d86-a6c2-ae3de8941695/volumes" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.687018 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f05c39-1d67-4f70-a64f-fd2d8e160a58" path="/var/lib/kubelet/pods/80f05c39-1d67-4f70-a64f-fd2d8e160a58/volumes" Nov 24 21:39:46 crc kubenswrapper[4801]: I1124 21:39:46.688456 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d" path="/var/lib/kubelet/pods/f5de8675-66cf-41ac-9d83-aeeb7a0b6d4d/volumes" Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.059550 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ef6a-account-create-d9b7h"] Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.072612 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f95e-account-create-zl2r5"] Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.084644 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ef6a-account-create-d9b7h"] Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.095790 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f95e-account-create-zl2r5"] Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.685028 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f744fe-61c0-4179-b7c2-406d3255d02c" path="/var/lib/kubelet/pods/03f744fe-61c0-4179-b7c2-406d3255d02c/volumes" Nov 24 21:39:48 crc kubenswrapper[4801]: I1124 21:39:48.686936 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1ce2cf-197e-41ee-abe5-652d675d160f" path="/var/lib/kubelet/pods/db1ce2cf-197e-41ee-abe5-652d675d160f/volumes" Nov 24 21:39:54 crc kubenswrapper[4801]: I1124 21:39:54.320035 4801 patch_prober.go:28] interesting 
pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:39:54 crc kubenswrapper[4801]: I1124 21:39:54.321484 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:40:24 crc kubenswrapper[4801]: I1124 21:40:24.320572 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:40:24 crc kubenswrapper[4801]: I1124 21:40:24.321468 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:40:41 crc kubenswrapper[4801]: I1124 21:40:41.045972 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m29qg"] Nov 24 21:40:41 crc kubenswrapper[4801]: I1124 21:40:41.056533 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m29qg"] Nov 24 21:40:42 crc kubenswrapper[4801]: I1124 21:40:42.678991 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccceb717-3d31-47bd-a9af-983a5a247278" path="/var/lib/kubelet/pods/ccceb717-3d31-47bd-a9af-983a5a247278/volumes" Nov 24 21:40:45 
crc kubenswrapper[4801]: I1124 21:40:45.046098 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-v8w6x"] Nov 24 21:40:45 crc kubenswrapper[4801]: I1124 21:40:45.062149 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-v8w6x"] Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.054096 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-92dd-account-create-frmpq"] Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.070269 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-92dd-account-create-frmpq"] Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.678454 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209d129f-a849-4435-b353-77db8299bb51" path="/var/lib/kubelet/pods/209d129f-a849-4435-b353-77db8299bb51/volumes" Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.679200 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687bf50f-6505-48a5-ae26-7b0fbaf6da04" path="/var/lib/kubelet/pods/687bf50f-6505-48a5-ae26-7b0fbaf6da04/volumes" Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.848117 4801 scope.go:117] "RemoveContainer" containerID="8385206f427c184de8ec2b3fb01dccad41edaeec6aa3612792b2b43dff97dd0b" Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.881595 4801 scope.go:117] "RemoveContainer" containerID="e626d596dc6dad2c961980cb46a4b79dc85a7e7cebda9bed908a633904b537c4" Nov 24 21:40:46 crc kubenswrapper[4801]: I1124 21:40:46.962666 4801 scope.go:117] "RemoveContainer" containerID="2c052e48e13ed5147d943bde248a33072b94719409e2e6e97569ed947aaa4f10" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.036903 4801 scope.go:117] "RemoveContainer" containerID="86b6beec5b549f854c1ff6db8775b70d3d8e02222ba1cd0b37eb601fabcc672d" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.086240 4801 scope.go:117] "RemoveContainer" 
containerID="3da80d0a2ee04ecfe2af360b41cf934a4ef5ba8ee4feb894dd4fbb1b81d96ec8" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.151009 4801 scope.go:117] "RemoveContainer" containerID="12fdaea0a45081b60c5dfc618e645014f4d19e001ed21add99965fd0c231392e" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.214966 4801 scope.go:117] "RemoveContainer" containerID="4e51a3d93a1fc66c780608126f6c735a668c372583e6ae7a71b533335eb4c056" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.245107 4801 scope.go:117] "RemoveContainer" containerID="99b582b2fe5975c7a35ad36ccf6a0b081f22b7c1e29dc3f4689060380f327e39" Nov 24 21:40:47 crc kubenswrapper[4801]: I1124 21:40:47.273800 4801 scope.go:117] "RemoveContainer" containerID="d96577b4d9e28a423bfe3b2de2d0a36299c0d16403c06338d62681f784a5b363" Nov 24 21:40:50 crc kubenswrapper[4801]: I1124 21:40:50.715954 4801 generic.go:334] "Generic (PLEG): container finished" podID="821b772b-2d36-4166-bfff-48e9f623f3bf" containerID="669d4a819436206d5a4ab0c46b31c5324dd73f64ebaba39b7fbda27dd96c04d7" exitCode=0 Nov 24 21:40:50 crc kubenswrapper[4801]: I1124 21:40:50.716059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" event={"ID":"821b772b-2d36-4166-bfff-48e9f623f3bf","Type":"ContainerDied","Data":"669d4a819436206d5a4ab0c46b31c5324dd73f64ebaba39b7fbda27dd96c04d7"} Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.297828 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.301655 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key\") pod \"821b772b-2d36-4166-bfff-48e9f623f3bf\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.302409 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn8mc\" (UniqueName: \"kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc\") pod \"821b772b-2d36-4166-bfff-48e9f623f3bf\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.302606 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory\") pod \"821b772b-2d36-4166-bfff-48e9f623f3bf\" (UID: \"821b772b-2d36-4166-bfff-48e9f623f3bf\") " Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.309552 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc" (OuterVolumeSpecName: "kube-api-access-rn8mc") pod "821b772b-2d36-4166-bfff-48e9f623f3bf" (UID: "821b772b-2d36-4166-bfff-48e9f623f3bf"). InnerVolumeSpecName "kube-api-access-rn8mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.373899 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory" (OuterVolumeSpecName: "inventory") pod "821b772b-2d36-4166-bfff-48e9f623f3bf" (UID: "821b772b-2d36-4166-bfff-48e9f623f3bf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.376754 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "821b772b-2d36-4166-bfff-48e9f623f3bf" (UID: "821b772b-2d36-4166-bfff-48e9f623f3bf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.404994 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.405035 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn8mc\" (UniqueName: \"kubernetes.io/projected/821b772b-2d36-4166-bfff-48e9f623f3bf-kube-api-access-rn8mc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.405080 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/821b772b-2d36-4166-bfff-48e9f623f3bf-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.748726 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" event={"ID":"821b772b-2d36-4166-bfff-48e9f623f3bf","Type":"ContainerDied","Data":"1956962499dcb720cd25be0fe9b5cc59978ba6a07c7ff6539237243048134e1f"} Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.748781 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1956962499dcb720cd25be0fe9b5cc59978ba6a07c7ff6539237243048134e1f" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.748927 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.908689 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx"] Nov 24 21:40:52 crc kubenswrapper[4801]: E1124 21:40:52.909848 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821b772b-2d36-4166-bfff-48e9f623f3bf" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.909884 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="821b772b-2d36-4166-bfff-48e9f623f3bf" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.910421 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="821b772b-2d36-4166-bfff-48e9f623f3bf" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.912186 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.918060 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.918478 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.918582 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.918706 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:40:52 crc kubenswrapper[4801]: I1124 21:40:52.933914 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx"] Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.023473 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.023668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 
21:40:53.025674 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.130762 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.131006 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.131085 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.136897 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.138093 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.159493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.244532 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:40:53 crc kubenswrapper[4801]: I1124 21:40:53.962244 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx"] Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.320125 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.320187 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.320239 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.321232 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.321322 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" 
containerID="cri-o://e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6" gracePeriod=600 Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.781382 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6" exitCode=0 Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.781468 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6"} Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.781969 4801 scope.go:117] "RemoveContainer" containerID="9a8bad6e61c5834dcec17ff06cc9b4e1213b5869b38a54566aea8d19375b37c6" Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.787062 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" event={"ID":"0a88366c-083c-452a-b097-2087766abeb3","Type":"ContainerStarted","Data":"a3b14c637c91bf9e0f6a8b796b29fd28a6b7b29c44152c6b44cb457440c781ce"} Nov 24 21:40:54 crc kubenswrapper[4801]: I1124 21:40:54.847352 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" podStartSLOduration=2.403456674 podStartE2EDuration="2.847328413s" podCreationTimestamp="2025-11-24 21:40:52 +0000 UTC" firstStartedPulling="2025-11-24 21:40:53.982577355 +0000 UTC m=+2026.065164025" lastFinishedPulling="2025-11-24 21:40:54.426449084 +0000 UTC m=+2026.509035764" observedRunningTime="2025-11-24 21:40:54.840753998 +0000 UTC m=+2026.923340678" watchObservedRunningTime="2025-11-24 21:40:54.847328413 +0000 UTC m=+2026.929915083" Nov 24 21:40:55 crc kubenswrapper[4801]: I1124 21:40:55.810549 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88"} Nov 24 21:40:55 crc kubenswrapper[4801]: I1124 21:40:55.812922 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" event={"ID":"0a88366c-083c-452a-b097-2087766abeb3","Type":"ContainerStarted","Data":"3756951da8056e6160fd491f0e385a02a7a8cb6b9f6c271649c1fb0501fd74cd"} Nov 24 21:41:00 crc kubenswrapper[4801]: I1124 21:41:00.893515 4801 generic.go:334] "Generic (PLEG): container finished" podID="0a88366c-083c-452a-b097-2087766abeb3" containerID="3756951da8056e6160fd491f0e385a02a7a8cb6b9f6c271649c1fb0501fd74cd" exitCode=0 Nov 24 21:41:00 crc kubenswrapper[4801]: I1124 21:41:00.893690 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" event={"ID":"0a88366c-083c-452a-b097-2087766abeb3","Type":"ContainerDied","Data":"3756951da8056e6160fd491f0e385a02a7a8cb6b9f6c271649c1fb0501fd74cd"} Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.511620 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.626789 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv\") pod \"0a88366c-083c-452a-b097-2087766abeb3\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.627226 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory\") pod \"0a88366c-083c-452a-b097-2087766abeb3\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.627328 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key\") pod \"0a88366c-083c-452a-b097-2087766abeb3\" (UID: \"0a88366c-083c-452a-b097-2087766abeb3\") " Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.661562 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv" (OuterVolumeSpecName: "kube-api-access-8nlfv") pod "0a88366c-083c-452a-b097-2087766abeb3" (UID: "0a88366c-083c-452a-b097-2087766abeb3"). InnerVolumeSpecName "kube-api-access-8nlfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.701578 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory" (OuterVolumeSpecName: "inventory") pod "0a88366c-083c-452a-b097-2087766abeb3" (UID: "0a88366c-083c-452a-b097-2087766abeb3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.701843 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0a88366c-083c-452a-b097-2087766abeb3" (UID: "0a88366c-083c-452a-b097-2087766abeb3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.730981 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.731008 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a88366c-083c-452a-b097-2087766abeb3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.731018 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/0a88366c-083c-452a-b097-2087766abeb3-kube-api-access-8nlfv\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.925212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" event={"ID":"0a88366c-083c-452a-b097-2087766abeb3","Type":"ContainerDied","Data":"a3b14c637c91bf9e0f6a8b796b29fd28a6b7b29c44152c6b44cb457440c781ce"} Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.925786 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b14c637c91bf9e0f6a8b796b29fd28a6b7b29c44152c6b44cb457440c781ce" Nov 24 21:41:02 crc kubenswrapper[4801]: I1124 21:41:02.925272 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.014375 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd"] Nov 24 21:41:03 crc kubenswrapper[4801]: E1124 21:41:03.014972 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a88366c-083c-452a-b097-2087766abeb3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.014996 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a88366c-083c-452a-b097-2087766abeb3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.015258 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a88366c-083c-452a-b097-2087766abeb3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.016222 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.019748 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.019805 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.020219 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.020419 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.041314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd"] Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.143668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.144303 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.144440 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr7g\" (UniqueName: \"kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.246838 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.247027 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.247073 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdr7g\" (UniqueName: \"kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.252928 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: 
\"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.253331 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.264794 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdr7g\" (UniqueName: \"kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7rgnd\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:03 crc kubenswrapper[4801]: I1124 21:41:03.343727 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:04 crc kubenswrapper[4801]: I1124 21:41:04.017651 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd"] Nov 24 21:41:04 crc kubenswrapper[4801]: I1124 21:41:04.953905 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" event={"ID":"0664fc9e-325e-495e-9a4d-342fdebda59c","Type":"ContainerStarted","Data":"842ffa67995c7655417b3a433fab15a5ed2fe23f5cdbf7bcef9286868af15d63"} Nov 24 21:41:04 crc kubenswrapper[4801]: I1124 21:41:04.954516 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" event={"ID":"0664fc9e-325e-495e-9a4d-342fdebda59c","Type":"ContainerStarted","Data":"49ea077eaa6ed1e2c2425f073a32af74310ecec47adc908deb7a344e00ef5e97"} Nov 24 21:41:04 crc kubenswrapper[4801]: I1124 21:41:04.978273 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" podStartSLOduration=2.560678197 podStartE2EDuration="2.978210942s" podCreationTimestamp="2025-11-24 21:41:02 +0000 UTC" firstStartedPulling="2025-11-24 21:41:04.026910707 +0000 UTC m=+2036.109497377" lastFinishedPulling="2025-11-24 21:41:04.444443452 +0000 UTC m=+2036.527030122" observedRunningTime="2025-11-24 21:41:04.973511584 +0000 UTC m=+2037.056098284" watchObservedRunningTime="2025-11-24 21:41:04.978210942 +0000 UTC m=+2037.060797652" Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 21:41:12.075280 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-prbd8"] Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 21:41:12.090463 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r8zks"] Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 
21:41:12.105554 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-prbd8"] Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 21:41:12.116903 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r8zks"] Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 21:41:12.682333 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf84f5e0-9d3c-4023-80f7-e84a6c810221" path="/var/lib/kubelet/pods/cf84f5e0-9d3c-4023-80f7-e84a6c810221/volumes" Nov 24 21:41:12 crc kubenswrapper[4801]: I1124 21:41:12.683411 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee263086-3eef-4ad1-903d-d7c18a90028f" path="/var/lib/kubelet/pods/ee263086-3eef-4ad1-903d-d7c18a90028f/volumes" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.630616 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.635091 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.657062 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.761101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.761589 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r76qc\" (UniqueName: \"kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.761645 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.876888 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.877691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.878102 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r76qc\" (UniqueName: \"kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.879306 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.879317 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:27 crc kubenswrapper[4801]: I1124 21:41:27.913970 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r76qc\" (UniqueName: \"kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc\") pod \"redhat-operators-zqpcd\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:28 crc kubenswrapper[4801]: I1124 21:41:28.031214 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:28 crc kubenswrapper[4801]: I1124 21:41:28.531956 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:41:28 crc kubenswrapper[4801]: W1124 21:41:28.537612 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24af5cae_1abe_47eb_8f6c_1968eee443e8.slice/crio-ba3bf2b8200b50d03ae1e9a61551bb98e287bac7e2bc5942f31031edea5f40c7 WatchSource:0}: Error finding container ba3bf2b8200b50d03ae1e9a61551bb98e287bac7e2bc5942f31031edea5f40c7: Status 404 returned error can't find the container with id ba3bf2b8200b50d03ae1e9a61551bb98e287bac7e2bc5942f31031edea5f40c7 Nov 24 21:41:29 crc kubenswrapper[4801]: I1124 21:41:29.409192 4801 generic.go:334] "Generic (PLEG): container finished" podID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerID="c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4801]: I1124 21:41:29.409251 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerDied","Data":"c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238"} Nov 24 21:41:29 crc kubenswrapper[4801]: I1124 21:41:29.409975 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerStarted","Data":"ba3bf2b8200b50d03ae1e9a61551bb98e287bac7e2bc5942f31031edea5f40c7"} Nov 24 21:41:30 crc kubenswrapper[4801]: I1124 21:41:30.426214 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" 
event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerStarted","Data":"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1"} Nov 24 21:41:34 crc kubenswrapper[4801]: I1124 21:41:34.497247 4801 generic.go:334] "Generic (PLEG): container finished" podID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerID="9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1" exitCode=0 Nov 24 21:41:34 crc kubenswrapper[4801]: I1124 21:41:34.497340 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerDied","Data":"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1"} Nov 24 21:41:35 crc kubenswrapper[4801]: I1124 21:41:35.515212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerStarted","Data":"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7"} Nov 24 21:41:35 crc kubenswrapper[4801]: I1124 21:41:35.554245 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqpcd" podStartSLOduration=3.018289052 podStartE2EDuration="8.554213561s" podCreationTimestamp="2025-11-24 21:41:27 +0000 UTC" firstStartedPulling="2025-11-24 21:41:29.41217691 +0000 UTC m=+2061.494763580" lastFinishedPulling="2025-11-24 21:41:34.948101409 +0000 UTC m=+2067.030688089" observedRunningTime="2025-11-24 21:41:35.545269461 +0000 UTC m=+2067.627856131" watchObservedRunningTime="2025-11-24 21:41:35.554213561 +0000 UTC m=+2067.636800231" Nov 24 21:41:38 crc kubenswrapper[4801]: I1124 21:41:38.031915 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:38 crc kubenswrapper[4801]: I1124 21:41:38.032909 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:41:39 crc kubenswrapper[4801]: I1124 21:41:39.091449 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqpcd" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" probeResult="failure" output=< Nov 24 21:41:39 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:41:39 crc kubenswrapper[4801]: > Nov 24 21:41:47 crc kubenswrapper[4801]: I1124 21:41:47.535913 4801 scope.go:117] "RemoveContainer" containerID="fe910383ce98b07da84698872943eb54c77b103af981085060792eb29cea0f11" Nov 24 21:41:47 crc kubenswrapper[4801]: I1124 21:41:47.585833 4801 scope.go:117] "RemoveContainer" containerID="eb3334215a7d6b369544df16d3f5062ed4500b8177d1f1fd71e4d0f83bd39a88" Nov 24 21:41:48 crc kubenswrapper[4801]: I1124 21:41:48.700527 4801 generic.go:334] "Generic (PLEG): container finished" podID="0664fc9e-325e-495e-9a4d-342fdebda59c" containerID="842ffa67995c7655417b3a433fab15a5ed2fe23f5cdbf7bcef9286868af15d63" exitCode=0 Nov 24 21:41:48 crc kubenswrapper[4801]: I1124 21:41:48.700628 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" event={"ID":"0664fc9e-325e-495e-9a4d-342fdebda59c","Type":"ContainerDied","Data":"842ffa67995c7655417b3a433fab15a5ed2fe23f5cdbf7bcef9286868af15d63"} Nov 24 21:41:49 crc kubenswrapper[4801]: I1124 21:41:49.143582 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqpcd" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" probeResult="failure" output=< Nov 24 21:41:49 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:41:49 crc kubenswrapper[4801]: > Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.346565 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.484743 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdr7g\" (UniqueName: \"kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g\") pod \"0664fc9e-325e-495e-9a4d-342fdebda59c\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.484922 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory\") pod \"0664fc9e-325e-495e-9a4d-342fdebda59c\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.485055 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key\") pod \"0664fc9e-325e-495e-9a4d-342fdebda59c\" (UID: \"0664fc9e-325e-495e-9a4d-342fdebda59c\") " Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.495778 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g" (OuterVolumeSpecName: "kube-api-access-jdr7g") pod "0664fc9e-325e-495e-9a4d-342fdebda59c" (UID: "0664fc9e-325e-495e-9a4d-342fdebda59c"). InnerVolumeSpecName "kube-api-access-jdr7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.527490 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory" (OuterVolumeSpecName: "inventory") pod "0664fc9e-325e-495e-9a4d-342fdebda59c" (UID: "0664fc9e-325e-495e-9a4d-342fdebda59c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.550417 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0664fc9e-325e-495e-9a4d-342fdebda59c" (UID: "0664fc9e-325e-495e-9a4d-342fdebda59c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.588651 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdr7g\" (UniqueName: \"kubernetes.io/projected/0664fc9e-325e-495e-9a4d-342fdebda59c-kube-api-access-jdr7g\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.588688 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.588698 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0664fc9e-325e-495e-9a4d-342fdebda59c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.733268 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" event={"ID":"0664fc9e-325e-495e-9a4d-342fdebda59c","Type":"ContainerDied","Data":"49ea077eaa6ed1e2c2425f073a32af74310ecec47adc908deb7a344e00ef5e97"} Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.733324 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ea077eaa6ed1e2c2425f073a32af74310ecec47adc908deb7a344e00ef5e97" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.733386 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7rgnd" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.851328 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64"] Nov 24 21:41:50 crc kubenswrapper[4801]: E1124 21:41:50.851962 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0664fc9e-325e-495e-9a4d-342fdebda59c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.851990 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0664fc9e-325e-495e-9a4d-342fdebda59c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.852228 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0664fc9e-325e-495e-9a4d-342fdebda59c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.853249 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.856090 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.856377 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.856443 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.856664 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:41:50 crc kubenswrapper[4801]: I1124 21:41:50.884678 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64"] Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.000728 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.000927 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqlq\" (UniqueName: \"kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.001132 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.105938 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.106029 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqlq\" (UniqueName: \"kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.106079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.116734 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: 
\"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.124506 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.128083 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqlq\" (UniqueName: \"kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c9n64\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.187393 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:41:51 crc kubenswrapper[4801]: I1124 21:41:51.833512 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64"] Nov 24 21:41:52 crc kubenswrapper[4801]: I1124 21:41:52.765101 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" event={"ID":"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87","Type":"ContainerStarted","Data":"04002ded8a754bd6c019b959cc7e6b9b65918b03e1d381a4059091045fd3d5ac"} Nov 24 21:41:52 crc kubenswrapper[4801]: I1124 21:41:52.765957 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" event={"ID":"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87","Type":"ContainerStarted","Data":"e0daf00baaa97a4c0eabee399966439201c8eb7e59b3ec9c339cf0a338eaf763"} Nov 24 21:41:57 crc kubenswrapper[4801]: I1124 21:41:57.044175 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" podStartSLOduration=6.646141536 podStartE2EDuration="7.04414585s" podCreationTimestamp="2025-11-24 21:41:50 +0000 UTC" firstStartedPulling="2025-11-24 21:41:51.835984778 +0000 UTC m=+2083.918571448" lastFinishedPulling="2025-11-24 21:41:52.233989092 +0000 UTC m=+2084.316575762" observedRunningTime="2025-11-24 21:41:52.785694202 +0000 UTC m=+2084.868280872" watchObservedRunningTime="2025-11-24 21:41:57.04414585 +0000 UTC m=+2089.126732520" Nov 24 21:41:57 crc kubenswrapper[4801]: I1124 21:41:57.053864 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z2j4p"] Nov 24 21:41:57 crc kubenswrapper[4801]: I1124 21:41:57.064487 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z2j4p"] Nov 24 21:41:58 crc kubenswrapper[4801]: I1124 
21:41:58.679313 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532d5719-89fc-47bc-bab7-9afb76342bf3" path="/var/lib/kubelet/pods/532d5719-89fc-47bc-bab7-9afb76342bf3/volumes" Nov 24 21:41:59 crc kubenswrapper[4801]: I1124 21:41:59.143411 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqpcd" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" probeResult="failure" output=< Nov 24 21:41:59 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:41:59 crc kubenswrapper[4801]: > Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.743908 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.751081 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.789132 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.860102 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqb2\" (UniqueName: \"kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.860437 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 
21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.860835 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.963579 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.963690 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.963812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqb2\" (UniqueName: \"kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.964261 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 
21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.964256 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:00 crc kubenswrapper[4801]: I1124 21:42:00.995584 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqb2\" (UniqueName: \"kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2\") pod \"community-operators-qb5b2\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:01 crc kubenswrapper[4801]: I1124 21:42:01.096047 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:01 crc kubenswrapper[4801]: I1124 21:42:01.715361 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:01 crc kubenswrapper[4801]: I1124 21:42:01.909627 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerStarted","Data":"f96399967c25c395c7f84960a9a7a9bfe447ecf25ede457d4b77989d48a5812f"} Nov 24 21:42:02 crc kubenswrapper[4801]: I1124 21:42:02.932815 4801 generic.go:334] "Generic (PLEG): container finished" podID="c00c7598-386f-4399-b3e0-f41834b3034a" containerID="dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e" exitCode=0 Nov 24 21:42:02 crc kubenswrapper[4801]: I1124 21:42:02.933203 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" 
event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerDied","Data":"dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e"} Nov 24 21:42:03 crc kubenswrapper[4801]: I1124 21:42:03.948747 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerStarted","Data":"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe"} Nov 24 21:42:05 crc kubenswrapper[4801]: I1124 21:42:05.979623 4801 generic.go:334] "Generic (PLEG): container finished" podID="c00c7598-386f-4399-b3e0-f41834b3034a" containerID="818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe" exitCode=0 Nov 24 21:42:05 crc kubenswrapper[4801]: I1124 21:42:05.979727 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerDied","Data":"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe"} Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.080746 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.085532 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.098683 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.135769 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.135919 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n6f\" (UniqueName: \"kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.136105 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.238740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2n6f\" (UniqueName: \"kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.238979 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.239058 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.239729 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.239805 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.265581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2n6f\" (UniqueName: \"kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f\") pod \"certified-operators-4wqdf\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:06 crc kubenswrapper[4801]: I1124 21:42:06.426427 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:07 crc kubenswrapper[4801]: I1124 21:42:07.004776 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerStarted","Data":"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29"} Nov 24 21:42:07 crc kubenswrapper[4801]: I1124 21:42:07.032467 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qb5b2" podStartSLOduration=3.583572116 podStartE2EDuration="7.032411849s" podCreationTimestamp="2025-11-24 21:42:00 +0000 UTC" firstStartedPulling="2025-11-24 21:42:02.938748054 +0000 UTC m=+2095.021334724" lastFinishedPulling="2025-11-24 21:42:06.387587747 +0000 UTC m=+2098.470174457" observedRunningTime="2025-11-24 21:42:07.03149223 +0000 UTC m=+2099.114078900" watchObservedRunningTime="2025-11-24 21:42:07.032411849 +0000 UTC m=+2099.114998519" Nov 24 21:42:07 crc kubenswrapper[4801]: W1124 21:42:07.157413 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8f528ba_5363_4056_883b_8a73f37dbe66.slice/crio-9536b2a1e26077710873ca80d00c571a29a213b6ebee4717a4d711e7bd9b500c WatchSource:0}: Error finding container 9536b2a1e26077710873ca80d00c571a29a213b6ebee4717a4d711e7bd9b500c: Status 404 returned error can't find the container with id 9536b2a1e26077710873ca80d00c571a29a213b6ebee4717a4d711e7bd9b500c Nov 24 21:42:07 crc kubenswrapper[4801]: I1124 21:42:07.157768 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:08 crc kubenswrapper[4801]: I1124 21:42:08.022559 4801 generic.go:334] "Generic (PLEG): container finished" podID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerID="954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb" 
exitCode=0 Nov 24 21:42:08 crc kubenswrapper[4801]: I1124 21:42:08.022859 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerDied","Data":"954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb"} Nov 24 21:42:08 crc kubenswrapper[4801]: I1124 21:42:08.023239 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerStarted","Data":"9536b2a1e26077710873ca80d00c571a29a213b6ebee4717a4d711e7bd9b500c"} Nov 24 21:42:08 crc kubenswrapper[4801]: I1124 21:42:08.118320 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:42:08 crc kubenswrapper[4801]: I1124 21:42:08.184663 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:42:09 crc kubenswrapper[4801]: I1124 21:42:09.037998 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerStarted","Data":"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0"} Nov 24 21:42:11 crc kubenswrapper[4801]: I1124 21:42:11.073508 4801 generic.go:334] "Generic (PLEG): container finished" podID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerID="7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0" exitCode=0 Nov 24 21:42:11 crc kubenswrapper[4801]: I1124 21:42:11.074000 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerDied","Data":"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0"} Nov 24 21:42:11 crc kubenswrapper[4801]: I1124 21:42:11.096238 
4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:11 crc kubenswrapper[4801]: I1124 21:42:11.096805 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:11 crc kubenswrapper[4801]: I1124 21:42:11.174866 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.100465 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerStarted","Data":"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224"} Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.136189 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wqdf" podStartSLOduration=2.6496418840000002 podStartE2EDuration="6.136153706s" podCreationTimestamp="2025-11-24 21:42:06 +0000 UTC" firstStartedPulling="2025-11-24 21:42:08.02648642 +0000 UTC m=+2100.109073090" lastFinishedPulling="2025-11-24 21:42:11.512998202 +0000 UTC m=+2103.595584912" observedRunningTime="2025-11-24 21:42:12.127707062 +0000 UTC m=+2104.210293742" watchObservedRunningTime="2025-11-24 21:42:12.136153706 +0000 UTC m=+2104.218740406" Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.186611 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.257433 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.258249 4801 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-zqpcd" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" containerID="cri-o://8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7" gracePeriod=2 Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.815524 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.997350 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r76qc\" (UniqueName: \"kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc\") pod \"24af5cae-1abe-47eb-8f6c-1968eee443e8\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.997974 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content\") pod \"24af5cae-1abe-47eb-8f6c-1968eee443e8\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " Nov 24 21:42:12 crc kubenswrapper[4801]: I1124 21:42:12.998215 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities\") pod \"24af5cae-1abe-47eb-8f6c-1968eee443e8\" (UID: \"24af5cae-1abe-47eb-8f6c-1968eee443e8\") " Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.008037 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities" (OuterVolumeSpecName: "utilities") pod "24af5cae-1abe-47eb-8f6c-1968eee443e8" (UID: "24af5cae-1abe-47eb-8f6c-1968eee443e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.008491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc" (OuterVolumeSpecName: "kube-api-access-r76qc") pod "24af5cae-1abe-47eb-8f6c-1968eee443e8" (UID: "24af5cae-1abe-47eb-8f6c-1968eee443e8"). InnerVolumeSpecName "kube-api-access-r76qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.102825 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.102871 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r76qc\" (UniqueName: \"kubernetes.io/projected/24af5cae-1abe-47eb-8f6c-1968eee443e8-kube-api-access-r76qc\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.118280 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24af5cae-1abe-47eb-8f6c-1968eee443e8" (UID: "24af5cae-1abe-47eb-8f6c-1968eee443e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.123850 4801 generic.go:334] "Generic (PLEG): container finished" podID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerID="8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7" exitCode=0 Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.124320 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpcd" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.125803 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerDied","Data":"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7"} Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.125853 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpcd" event={"ID":"24af5cae-1abe-47eb-8f6c-1968eee443e8","Type":"ContainerDied","Data":"ba3bf2b8200b50d03ae1e9a61551bb98e287bac7e2bc5942f31031edea5f40c7"} Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.125883 4801 scope.go:117] "RemoveContainer" containerID="8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.162699 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.167998 4801 scope.go:117] "RemoveContainer" containerID="9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.182840 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqpcd"] Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.203902 4801 scope.go:117] "RemoveContainer" containerID="c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.205617 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af5cae-1abe-47eb-8f6c-1968eee443e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.250450 4801 scope.go:117] "RemoveContainer" 
containerID="8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7" Nov 24 21:42:13 crc kubenswrapper[4801]: E1124 21:42:13.250852 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7\": container with ID starting with 8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7 not found: ID does not exist" containerID="8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.250887 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7"} err="failed to get container status \"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7\": rpc error: code = NotFound desc = could not find container \"8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7\": container with ID starting with 8e9c7d5a71574a372a65f504ca6ecb7e4eed210f08b3da23dadfd39254f6b6e7 not found: ID does not exist" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.250916 4801 scope.go:117] "RemoveContainer" containerID="9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1" Nov 24 21:42:13 crc kubenswrapper[4801]: E1124 21:42:13.251293 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1\": container with ID starting with 9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1 not found: ID does not exist" containerID="9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.251314 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1"} err="failed to get container status \"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1\": rpc error: code = NotFound desc = could not find container \"9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1\": container with ID starting with 9227933cf52048e72724c2b14b60f31a957791e2dcb00f155e16d55d5287a6f1 not found: ID does not exist" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.251329 4801 scope.go:117] "RemoveContainer" containerID="c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238" Nov 24 21:42:13 crc kubenswrapper[4801]: E1124 21:42:13.251589 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238\": container with ID starting with c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238 not found: ID does not exist" containerID="c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238" Nov 24 21:42:13 crc kubenswrapper[4801]: I1124 21:42:13.251618 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238"} err="failed to get container status \"c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238\": rpc error: code = NotFound desc = could not find container \"c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238\": container with ID starting with c51028a1505e0aa215383419f3bf92fb8103e77b0d841ae583758a76d1dbc238 not found: ID does not exist" Nov 24 21:42:14 crc kubenswrapper[4801]: I1124 21:42:14.465214 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:14 crc kubenswrapper[4801]: I1124 21:42:14.684612 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" path="/var/lib/kubelet/pods/24af5cae-1abe-47eb-8f6c-1968eee443e8/volumes" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.154844 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qb5b2" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="registry-server" containerID="cri-o://e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29" gracePeriod=2 Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.797073 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.885381 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqb2\" (UniqueName: \"kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2\") pod \"c00c7598-386f-4399-b3e0-f41834b3034a\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.885754 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content\") pod \"c00c7598-386f-4399-b3e0-f41834b3034a\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.886166 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities\") pod \"c00c7598-386f-4399-b3e0-f41834b3034a\" (UID: \"c00c7598-386f-4399-b3e0-f41834b3034a\") " Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.887002 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities" (OuterVolumeSpecName: 
"utilities") pod "c00c7598-386f-4399-b3e0-f41834b3034a" (UID: "c00c7598-386f-4399-b3e0-f41834b3034a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.895262 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2" (OuterVolumeSpecName: "kube-api-access-shqb2") pod "c00c7598-386f-4399-b3e0-f41834b3034a" (UID: "c00c7598-386f-4399-b3e0-f41834b3034a"). InnerVolumeSpecName "kube-api-access-shqb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.960262 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c00c7598-386f-4399-b3e0-f41834b3034a" (UID: "c00c7598-386f-4399-b3e0-f41834b3034a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.989358 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.989448 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqb2\" (UniqueName: \"kubernetes.io/projected/c00c7598-386f-4399-b3e0-f41834b3034a-kube-api-access-shqb2\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:15 crc kubenswrapper[4801]: I1124 21:42:15.989465 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c7598-386f-4399-b3e0-f41834b3034a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.169491 4801 generic.go:334] "Generic (PLEG): container finished" podID="c00c7598-386f-4399-b3e0-f41834b3034a" containerID="e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29" exitCode=0 Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.169568 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerDied","Data":"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29"} Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.169608 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb5b2" event={"ID":"c00c7598-386f-4399-b3e0-f41834b3034a","Type":"ContainerDied","Data":"f96399967c25c395c7f84960a9a7a9bfe447ecf25ede457d4b77989d48a5812f"} Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.169649 4801 scope.go:117] "RemoveContainer" containerID="e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 
21:42:16.169890 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb5b2" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.208905 4801 scope.go:117] "RemoveContainer" containerID="818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.221025 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.238751 4801 scope.go:117] "RemoveContainer" containerID="dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.239646 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qb5b2"] Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.305437 4801 scope.go:117] "RemoveContainer" containerID="e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29" Nov 24 21:42:16 crc kubenswrapper[4801]: E1124 21:42:16.306030 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29\": container with ID starting with e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29 not found: ID does not exist" containerID="e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.306121 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29"} err="failed to get container status \"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29\": rpc error: code = NotFound desc = could not find container \"e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29\": container with ID starting with 
e2ff596969b3fe20b0814d83a1aed8e36e7b3fd010253101e44a6ea90fe9db29 not found: ID does not exist" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.306171 4801 scope.go:117] "RemoveContainer" containerID="818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe" Nov 24 21:42:16 crc kubenswrapper[4801]: E1124 21:42:16.306630 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe\": container with ID starting with 818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe not found: ID does not exist" containerID="818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.306672 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe"} err="failed to get container status \"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe\": rpc error: code = NotFound desc = could not find container \"818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe\": container with ID starting with 818242a2dd3b45d368e668a118df9895367f5bda1532aed2aa25eb6fe68092fe not found: ID does not exist" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.306706 4801 scope.go:117] "RemoveContainer" containerID="dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e" Nov 24 21:42:16 crc kubenswrapper[4801]: E1124 21:42:16.306932 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e\": container with ID starting with dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e not found: ID does not exist" containerID="dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e" Nov 24 21:42:16 crc 
kubenswrapper[4801]: I1124 21:42:16.306954 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e"} err="failed to get container status \"dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e\": rpc error: code = NotFound desc = could not find container \"dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e\": container with ID starting with dd7029cca2a65ddaad8dd32df2f9b69eff3cef6d9ab3ef6843ebb49d3ef8e20e not found: ID does not exist" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.426928 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.427244 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.501680 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:16 crc kubenswrapper[4801]: I1124 21:42:16.687115 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" path="/var/lib/kubelet/pods/c00c7598-386f-4399-b3e0-f41834b3034a/volumes" Nov 24 21:42:17 crc kubenswrapper[4801]: I1124 21:42:17.270132 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:18 crc kubenswrapper[4801]: I1124 21:42:18.661997 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.257928 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wqdf" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" 
containerName="registry-server" containerID="cri-o://068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224" gracePeriod=2 Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.930656 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.982317 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content\") pod \"a8f528ba-5363-4056-883b-8a73f37dbe66\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.982635 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2n6f\" (UniqueName: \"kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f\") pod \"a8f528ba-5363-4056-883b-8a73f37dbe66\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.982801 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities\") pod \"a8f528ba-5363-4056-883b-8a73f37dbe66\" (UID: \"a8f528ba-5363-4056-883b-8a73f37dbe66\") " Nov 24 21:42:20 crc kubenswrapper[4801]: I1124 21:42:20.984953 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities" (OuterVolumeSpecName: "utilities") pod "a8f528ba-5363-4056-883b-8a73f37dbe66" (UID: "a8f528ba-5363-4056-883b-8a73f37dbe66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.002863 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f" (OuterVolumeSpecName: "kube-api-access-c2n6f") pod "a8f528ba-5363-4056-883b-8a73f37dbe66" (UID: "a8f528ba-5363-4056-883b-8a73f37dbe66"). InnerVolumeSpecName "kube-api-access-c2n6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.045479 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8f528ba-5363-4056-883b-8a73f37dbe66" (UID: "a8f528ba-5363-4056-883b-8a73f37dbe66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.087914 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.087957 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f528ba-5363-4056-883b-8a73f37dbe66-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.087970 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2n6f\" (UniqueName: \"kubernetes.io/projected/a8f528ba-5363-4056-883b-8a73f37dbe66-kube-api-access-c2n6f\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.272140 4801 generic.go:334] "Generic (PLEG): container finished" podID="a8f528ba-5363-4056-883b-8a73f37dbe66" 
containerID="068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224" exitCode=0 Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.272255 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wqdf" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.272249 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerDied","Data":"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224"} Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.273520 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqdf" event={"ID":"a8f528ba-5363-4056-883b-8a73f37dbe66","Type":"ContainerDied","Data":"9536b2a1e26077710873ca80d00c571a29a213b6ebee4717a4d711e7bd9b500c"} Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.273562 4801 scope.go:117] "RemoveContainer" containerID="068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.305894 4801 scope.go:117] "RemoveContainer" containerID="7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.338980 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.341679 4801 scope.go:117] "RemoveContainer" containerID="954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.358331 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wqdf"] Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.409920 4801 scope.go:117] "RemoveContainer" containerID="068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224" Nov 24 
21:42:21 crc kubenswrapper[4801]: E1124 21:42:21.410660 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224\": container with ID starting with 068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224 not found: ID does not exist" containerID="068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.410729 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224"} err="failed to get container status \"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224\": rpc error: code = NotFound desc = could not find container \"068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224\": container with ID starting with 068a25d96c9b80d519bfd032752b48816023590edc2e49827df143650acb1224 not found: ID does not exist" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.410776 4801 scope.go:117] "RemoveContainer" containerID="7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0" Nov 24 21:42:21 crc kubenswrapper[4801]: E1124 21:42:21.411179 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0\": container with ID starting with 7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0 not found: ID does not exist" containerID="7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.411224 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0"} err="failed to get container status 
\"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0\": rpc error: code = NotFound desc = could not find container \"7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0\": container with ID starting with 7101b2d1a35513bacc351a951514cea18843fbe99a14bea7d253e63e6f62dde0 not found: ID does not exist" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.411252 4801 scope.go:117] "RemoveContainer" containerID="954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb" Nov 24 21:42:21 crc kubenswrapper[4801]: E1124 21:42:21.411680 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb\": container with ID starting with 954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb not found: ID does not exist" containerID="954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb" Nov 24 21:42:21 crc kubenswrapper[4801]: I1124 21:42:21.411702 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb"} err="failed to get container status \"954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb\": rpc error: code = NotFound desc = could not find container \"954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb\": container with ID starting with 954d7c9093ebcda1d9e4ff10131f0baad0050183f6b94d283317fb35c1efbbfb not found: ID does not exist" Nov 24 21:42:22 crc kubenswrapper[4801]: I1124 21:42:22.686805 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" path="/var/lib/kubelet/pods/a8f528ba-5363-4056-883b-8a73f37dbe66/volumes" Nov 24 21:42:47 crc kubenswrapper[4801]: I1124 21:42:47.770252 4801 scope.go:117] "RemoveContainer" containerID="38f50e7ff5ce3f16b80c107066d3ccc44fd234f3a1200415ea8158d398b251c1" Nov 24 
21:42:52 crc kubenswrapper[4801]: I1124 21:42:52.728069 4801 generic.go:334] "Generic (PLEG): container finished" podID="f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" containerID="04002ded8a754bd6c019b959cc7e6b9b65918b03e1d381a4059091045fd3d5ac" exitCode=0 Nov 24 21:42:52 crc kubenswrapper[4801]: I1124 21:42:52.728651 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" event={"ID":"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87","Type":"ContainerDied","Data":"04002ded8a754bd6c019b959cc7e6b9b65918b03e1d381a4059091045fd3d5ac"} Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.302577 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.385881 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.385964 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.386280 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key\") pod \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.386531 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fmqlq\" (UniqueName: \"kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq\") pod \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.386709 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory\") pod \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\" (UID: \"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87\") " Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.881482 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l5gnv"] Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882007 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882028 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882042 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882050 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882071 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="extract-utilities" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882077 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="extract-utilities" Nov 24 21:42:54 crc 
kubenswrapper[4801]: E1124 21:42:54.882095 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882102 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882117 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882124 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882151 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882159 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882173 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882180 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="extract-content" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882189 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="extract-utilities" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882197 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="extract-utilities" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882223 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882229 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: E1124 21:42:54.882239 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="extract-utilities" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882245 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="extract-utilities" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882479 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882503 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="24af5cae-1abe-47eb-8f6c-1968eee443e8" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882532 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00c7598-386f-4399-b3e0-f41834b3034a" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.882544 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f528ba-5363-4056-883b-8a73f37dbe66" containerName="registry-server" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.883569 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.896547 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l5gnv"] Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.922005 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.946964 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c9n64" event={"ID":"f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87","Type":"ContainerDied","Data":"e0daf00baaa97a4c0eabee399966439201c8eb7e59b3ec9c339cf0a338eaf763"} Nov 24 21:42:54 crc kubenswrapper[4801]: I1124 21:42:54.947026 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0daf00baaa97a4c0eabee399966439201c8eb7e59b3ec9c339cf0a338eaf763" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.003207 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmks\" (UniqueName: \"kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.003340 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.003421 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.570453 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq" (OuterVolumeSpecName: "kube-api-access-fmqlq") pod "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" (UID: "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87"). InnerVolumeSpecName "kube-api-access-fmqlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.588873 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" (UID: "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.648210 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory" (OuterVolumeSpecName: "inventory") pod "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87" (UID: "f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.663828 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.664174 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.664960 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmks\" (UniqueName: \"kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.685424 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmqlq\" (UniqueName: \"kubernetes.io/projected/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-kube-api-access-fmqlq\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.694791 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.696345 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.698101 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.701461 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.719824 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmks\" (UniqueName: \"kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks\") pod \"ssh-known-hosts-edpm-deployment-l5gnv\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:55 crc kubenswrapper[4801]: I1124 21:42:55.837603 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:42:56 crc kubenswrapper[4801]: I1124 21:42:56.550343 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l5gnv"] Nov 24 21:42:56 crc kubenswrapper[4801]: I1124 21:42:56.559892 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:42:56 crc kubenswrapper[4801]: I1124 21:42:56.951356 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" event={"ID":"35523e2b-7056-4e9c-b019-cb8cb6a8490b","Type":"ContainerStarted","Data":"73fb915d235ccc9b63d32b34771fd6b402e5887c5e90a7316f0dfcf9806be57d"} Nov 24 21:42:57 crc kubenswrapper[4801]: I1124 21:42:57.964598 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" event={"ID":"35523e2b-7056-4e9c-b019-cb8cb6a8490b","Type":"ContainerStarted","Data":"34aa8794461c3c3711c44c352954f3182bbb3fc947e219f10cf0b11cbc546fec"} Nov 24 21:42:57 crc kubenswrapper[4801]: I1124 21:42:57.995804 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" podStartSLOduration=3.167718329 podStartE2EDuration="3.995777948s" podCreationTimestamp="2025-11-24 21:42:54 +0000 UTC" firstStartedPulling="2025-11-24 21:42:56.55939088 +0000 UTC m=+2148.641977570" lastFinishedPulling="2025-11-24 21:42:57.387450519 +0000 UTC m=+2149.470037189" observedRunningTime="2025-11-24 21:42:57.991663081 +0000 UTC m=+2150.074249761" watchObservedRunningTime="2025-11-24 21:42:57.995777948 +0000 UTC m=+2150.078364618" Nov 24 21:43:06 crc kubenswrapper[4801]: I1124 21:43:06.089739 4801 generic.go:334] "Generic (PLEG): container finished" podID="35523e2b-7056-4e9c-b019-cb8cb6a8490b" containerID="34aa8794461c3c3711c44c352954f3182bbb3fc947e219f10cf0b11cbc546fec" exitCode=0 Nov 24 21:43:06 crc kubenswrapper[4801]: 
I1124 21:43:06.089820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" event={"ID":"35523e2b-7056-4e9c-b019-cb8cb6a8490b","Type":"ContainerDied","Data":"34aa8794461c3c3711c44c352954f3182bbb3fc947e219f10cf0b11cbc546fec"} Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.729472 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.791558 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0\") pod \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.791683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmks\" (UniqueName: \"kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks\") pod \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.791769 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam\") pod \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\" (UID: \"35523e2b-7056-4e9c-b019-cb8cb6a8490b\") " Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.808081 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks" (OuterVolumeSpecName: "kube-api-access-spmks") pod "35523e2b-7056-4e9c-b019-cb8cb6a8490b" (UID: "35523e2b-7056-4e9c-b019-cb8cb6a8490b"). InnerVolumeSpecName "kube-api-access-spmks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.841491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35523e2b-7056-4e9c-b019-cb8cb6a8490b" (UID: "35523e2b-7056-4e9c-b019-cb8cb6a8490b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.852594 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "35523e2b-7056-4e9c-b019-cb8cb6a8490b" (UID: "35523e2b-7056-4e9c-b019-cb8cb6a8490b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.897544 4801 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.897605 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spmks\" (UniqueName: \"kubernetes.io/projected/35523e2b-7056-4e9c-b019-cb8cb6a8490b-kube-api-access-spmks\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:07 crc kubenswrapper[4801]: I1124 21:43:07.897623 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35523e2b-7056-4e9c-b019-cb8cb6a8490b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.126140 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" 
event={"ID":"35523e2b-7056-4e9c-b019-cb8cb6a8490b","Type":"ContainerDied","Data":"73fb915d235ccc9b63d32b34771fd6b402e5887c5e90a7316f0dfcf9806be57d"} Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.126210 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73fb915d235ccc9b63d32b34771fd6b402e5887c5e90a7316f0dfcf9806be57d" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.126223 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l5gnv" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.264762 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh"] Nov 24 21:43:08 crc kubenswrapper[4801]: E1124 21:43:08.266689 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35523e2b-7056-4e9c-b019-cb8cb6a8490b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.266723 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="35523e2b-7056-4e9c-b019-cb8cb6a8490b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.267429 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="35523e2b-7056-4e9c-b019-cb8cb6a8490b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.268752 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.272107 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.272734 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.272908 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.273146 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.324946 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh"] Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.337576 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzdl\" (UniqueName: \"kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.338191 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.338331 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.442009 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzdl\" (UniqueName: \"kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.442118 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.442158 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.449100 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.453949 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.462642 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzdl\" (UniqueName: \"kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cm7vh\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.609052 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:43:08 crc kubenswrapper[4801]: I1124 21:43:08.617325 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:09 crc kubenswrapper[4801]: I1124 21:43:09.285135 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh"] Nov 24 21:43:09 crc kubenswrapper[4801]: I1124 21:43:09.766520 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:43:10 crc kubenswrapper[4801]: I1124 21:43:10.156666 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" event={"ID":"520f6efa-a70d-4702-83b8-0c621dfd3a8a","Type":"ContainerStarted","Data":"f9b415daefe72b8ea03f7874cf8cb0f9d0f94aa7b4cea367a5a189f141f1bf7a"} Nov 24 21:43:10 crc kubenswrapper[4801]: I1124 21:43:10.156724 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" event={"ID":"520f6efa-a70d-4702-83b8-0c621dfd3a8a","Type":"ContainerStarted","Data":"d119aa59b1d7c64d078f056e6c2e22fd055140e5765c28d1d736668129fbb25b"} Nov 24 21:43:10 crc kubenswrapper[4801]: I1124 21:43:10.192125 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" podStartSLOduration=1.7178872649999999 podStartE2EDuration="2.192098499s" podCreationTimestamp="2025-11-24 21:43:08 +0000 UTC" firstStartedPulling="2025-11-24 21:43:09.286049971 +0000 UTC m=+2161.368636681" lastFinishedPulling="2025-11-24 21:43:09.760261245 +0000 UTC m=+2161.842847915" observedRunningTime="2025-11-24 21:43:10.181842289 +0000 UTC m=+2162.264428959" watchObservedRunningTime="2025-11-24 21:43:10.192098499 +0000 UTC m=+2162.274685169" Nov 24 21:43:16 crc kubenswrapper[4801]: I1124 21:43:16.082570 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qsj4k"] Nov 24 21:43:16 crc kubenswrapper[4801]: I1124 21:43:16.096938 4801 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qsj4k"] Nov 24 21:43:16 crc kubenswrapper[4801]: I1124 21:43:16.689311 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e49661-1ce6-4040-a709-54ce907173d5" path="/var/lib/kubelet/pods/36e49661-1ce6-4040-a709-54ce907173d5/volumes" Nov 24 21:43:20 crc kubenswrapper[4801]: I1124 21:43:20.313607 4801 generic.go:334] "Generic (PLEG): container finished" podID="520f6efa-a70d-4702-83b8-0c621dfd3a8a" containerID="f9b415daefe72b8ea03f7874cf8cb0f9d0f94aa7b4cea367a5a189f141f1bf7a" exitCode=0 Nov 24 21:43:20 crc kubenswrapper[4801]: I1124 21:43:20.313757 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" event={"ID":"520f6efa-a70d-4702-83b8-0c621dfd3a8a","Type":"ContainerDied","Data":"f9b415daefe72b8ea03f7874cf8cb0f9d0f94aa7b4cea367a5a189f141f1bf7a"} Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.019487 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.136354 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory\") pod \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.136830 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key\") pod \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.137171 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nzdl\" (UniqueName: \"kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl\") pod \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\" (UID: \"520f6efa-a70d-4702-83b8-0c621dfd3a8a\") " Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.146356 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl" (OuterVolumeSpecName: "kube-api-access-4nzdl") pod "520f6efa-a70d-4702-83b8-0c621dfd3a8a" (UID: "520f6efa-a70d-4702-83b8-0c621dfd3a8a"). InnerVolumeSpecName "kube-api-access-4nzdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.180027 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory" (OuterVolumeSpecName: "inventory") pod "520f6efa-a70d-4702-83b8-0c621dfd3a8a" (UID: "520f6efa-a70d-4702-83b8-0c621dfd3a8a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.184354 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "520f6efa-a70d-4702-83b8-0c621dfd3a8a" (UID: "520f6efa-a70d-4702-83b8-0c621dfd3a8a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.242470 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.243856 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/520f6efa-a70d-4702-83b8-0c621dfd3a8a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.244433 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nzdl\" (UniqueName: \"kubernetes.io/projected/520f6efa-a70d-4702-83b8-0c621dfd3a8a-kube-api-access-4nzdl\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.341153 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" event={"ID":"520f6efa-a70d-4702-83b8-0c621dfd3a8a","Type":"ContainerDied","Data":"d119aa59b1d7c64d078f056e6c2e22fd055140e5765c28d1d736668129fbb25b"} Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.341552 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d119aa59b1d7c64d078f056e6c2e22fd055140e5765c28d1d736668129fbb25b" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.341246 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cm7vh" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.478313 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25"] Nov 24 21:43:22 crc kubenswrapper[4801]: E1124 21:43:22.481800 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520f6efa-a70d-4702-83b8-0c621dfd3a8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.481843 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="520f6efa-a70d-4702-83b8-0c621dfd3a8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.482563 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="520f6efa-a70d-4702-83b8-0c621dfd3a8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.484275 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.489501 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.489756 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.489937 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.490017 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.506919 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25"] Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.597931 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.600681 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vl4z\" (UniqueName: \"kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.600733 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.706697 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.707046 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vl4z\" (UniqueName: \"kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.707161 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.715979 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.727926 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.752169 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vl4z\" (UniqueName: \"kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-56w25\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:22 crc kubenswrapper[4801]: I1124 21:43:22.805221 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:23 crc kubenswrapper[4801]: I1124 21:43:23.413619 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25"] Nov 24 21:43:24 crc kubenswrapper[4801]: I1124 21:43:24.320729 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:43:24 crc kubenswrapper[4801]: I1124 21:43:24.321335 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:43:24 crc kubenswrapper[4801]: I1124 21:43:24.369142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" event={"ID":"c79e580c-8efa-4619-b727-4d24b3c7435f","Type":"ContainerStarted","Data":"4923c3b2500dfbca8b312d42ac782ccc65bf0504d86a0ba65d5dc6d425266fce"} Nov 24 21:43:25 crc kubenswrapper[4801]: I1124 21:43:25.386537 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" event={"ID":"c79e580c-8efa-4619-b727-4d24b3c7435f","Type":"ContainerStarted","Data":"3b2b957cddcbe2fad14251b0ebc47b380da4e10f12a6bf11732bccc3e2964d18"} Nov 24 21:43:25 crc kubenswrapper[4801]: I1124 21:43:25.422259 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" podStartSLOduration=2.760372814 podStartE2EDuration="3.422227479s" 
podCreationTimestamp="2025-11-24 21:43:22 +0000 UTC" firstStartedPulling="2025-11-24 21:43:23.433199806 +0000 UTC m=+2175.515786506" lastFinishedPulling="2025-11-24 21:43:24.095054461 +0000 UTC m=+2176.177641171" observedRunningTime="2025-11-24 21:43:25.415633864 +0000 UTC m=+2177.498220564" watchObservedRunningTime="2025-11-24 21:43:25.422227479 +0000 UTC m=+2177.504814179" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.211267 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.222491 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.231760 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.313297 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.313556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.313599 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77xwx\" (UniqueName: 
\"kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.416218 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.416754 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77xwx\" (UniqueName: \"kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.416809 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.416867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.417634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.440801 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77xwx\" (UniqueName: \"kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx\") pod \"redhat-marketplace-9kchb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:34 crc kubenswrapper[4801]: I1124 21:43:34.570124 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.158838 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:35 crc kubenswrapper[4801]: W1124 21:43:35.161261 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49d46a20_7053_4918_9f0b_953feafa00cb.slice/crio-580ab804bb5ff1a926b33c21a093ed2de1efba057569c8f700ce8c823411db97 WatchSource:0}: Error finding container 580ab804bb5ff1a926b33c21a093ed2de1efba057569c8f700ce8c823411db97: Status 404 returned error can't find the container with id 580ab804bb5ff1a926b33c21a093ed2de1efba057569c8f700ce8c823411db97 Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.524745 4801 generic.go:334] "Generic (PLEG): container finished" podID="c79e580c-8efa-4619-b727-4d24b3c7435f" containerID="3b2b957cddcbe2fad14251b0ebc47b380da4e10f12a6bf11732bccc3e2964d18" exitCode=0 Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.524855 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" 
event={"ID":"c79e580c-8efa-4619-b727-4d24b3c7435f","Type":"ContainerDied","Data":"3b2b957cddcbe2fad14251b0ebc47b380da4e10f12a6bf11732bccc3e2964d18"} Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.528470 4801 generic.go:334] "Generic (PLEG): container finished" podID="49d46a20-7053-4918-9f0b-953feafa00cb" containerID="db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db" exitCode=0 Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.528513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerDied","Data":"db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db"} Nov 24 21:43:35 crc kubenswrapper[4801]: I1124 21:43:35.528534 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerStarted","Data":"580ab804bb5ff1a926b33c21a093ed2de1efba057569c8f700ce8c823411db97"} Nov 24 21:43:36 crc kubenswrapper[4801]: I1124 21:43:36.546916 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerStarted","Data":"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62"} Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.577233 4801 generic.go:334] "Generic (PLEG): container finished" podID="49d46a20-7053-4918-9f0b-953feafa00cb" containerID="75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62" exitCode=0 Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.577456 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerDied","Data":"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62"} Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 
21:43:37.582254 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" event={"ID":"c79e580c-8efa-4619-b727-4d24b3c7435f","Type":"ContainerDied","Data":"4923c3b2500dfbca8b312d42ac782ccc65bf0504d86a0ba65d5dc6d425266fce"} Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.582312 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4923c3b2500dfbca8b312d42ac782ccc65bf0504d86a0ba65d5dc6d425266fce" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.712486 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.864092 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key\") pod \"c79e580c-8efa-4619-b727-4d24b3c7435f\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.864192 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vl4z\" (UniqueName: \"kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z\") pod \"c79e580c-8efa-4619-b727-4d24b3c7435f\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.864310 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory\") pod \"c79e580c-8efa-4619-b727-4d24b3c7435f\" (UID: \"c79e580c-8efa-4619-b727-4d24b3c7435f\") " Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.871240 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z" 
(OuterVolumeSpecName: "kube-api-access-4vl4z") pod "c79e580c-8efa-4619-b727-4d24b3c7435f" (UID: "c79e580c-8efa-4619-b727-4d24b3c7435f"). InnerVolumeSpecName "kube-api-access-4vl4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.930656 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c79e580c-8efa-4619-b727-4d24b3c7435f" (UID: "c79e580c-8efa-4619-b727-4d24b3c7435f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.933509 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory" (OuterVolumeSpecName: "inventory") pod "c79e580c-8efa-4619-b727-4d24b3c7435f" (UID: "c79e580c-8efa-4619-b727-4d24b3c7435f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.967866 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.967913 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vl4z\" (UniqueName: \"kubernetes.io/projected/c79e580c-8efa-4619-b727-4d24b3c7435f-kube-api-access-4vl4z\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:37 crc kubenswrapper[4801]: I1124 21:43:37.967933 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c79e580c-8efa-4619-b727-4d24b3c7435f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.598793 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-56w25" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.600670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerStarted","Data":"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13"} Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.625173 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9kchb" podStartSLOduration=2.141783084 podStartE2EDuration="4.625149897s" podCreationTimestamp="2025-11-24 21:43:34 +0000 UTC" firstStartedPulling="2025-11-24 21:43:35.530893496 +0000 UTC m=+2187.613480186" lastFinishedPulling="2025-11-24 21:43:38.014260329 +0000 UTC m=+2190.096846999" observedRunningTime="2025-11-24 21:43:38.620862394 +0000 UTC m=+2190.703449074" watchObservedRunningTime="2025-11-24 21:43:38.625149897 +0000 UTC m=+2190.707736567" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.833335 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k"] Nov 24 21:43:38 crc kubenswrapper[4801]: E1124 21:43:38.833937 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79e580c-8efa-4619-b727-4d24b3c7435f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.833952 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79e580c-8efa-4619-b727-4d24b3c7435f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.834291 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79e580c-8efa-4619-b727-4d24b3c7435f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.842882 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.850576 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.850803 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.850912 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.851075 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.851206 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.851321 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.851466 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.852132 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.852261 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 24 21:43:38 crc kubenswrapper[4801]: I1124 21:43:38.855964 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k"] Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.002128 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.002422 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.002675 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.002933 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65kwl\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003042 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003263 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003419 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003510 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003896 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.003993 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004045 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004150 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004296 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004428 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004592 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.004644 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" 
(UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.108032 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.108164 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.108216 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.110135 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: 
\"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.110917 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111042 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65kwl\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111276 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 
21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111455 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111550 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111598 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111810 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111881 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.111999 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.112113 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.118314 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.118354 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.118799 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.119577 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.120251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.120747 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.121716 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.121825 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.121908 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.122424 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.124857 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.125515 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.125816 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc 
kubenswrapper[4801]: I1124 21:43:39.132892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.133419 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65kwl\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.136660 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5726k\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:39 crc kubenswrapper[4801]: I1124 21:43:39.194105 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:43:40 crc kubenswrapper[4801]: I1124 21:43:40.016818 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k"] Nov 24 21:43:40 crc kubenswrapper[4801]: I1124 21:43:40.644781 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" event={"ID":"a7df1790-5911-4056-b880-6140a93203b7","Type":"ContainerStarted","Data":"eda24da9b0afb7f4cbf18ebd78b92dc569ac660fe5ce84751f7f287d1f7a8a21"} Nov 24 21:43:41 crc kubenswrapper[4801]: I1124 21:43:41.662998 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" event={"ID":"a7df1790-5911-4056-b880-6140a93203b7","Type":"ContainerStarted","Data":"e086feb1d2c97475bb543080fa43c82cde24ac33ae6861846eb427ee791cf3d5"} Nov 24 21:43:41 crc kubenswrapper[4801]: I1124 21:43:41.715572 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" podStartSLOduration=3.027412245 podStartE2EDuration="3.715536268s" podCreationTimestamp="2025-11-24 21:43:38 +0000 UTC" firstStartedPulling="2025-11-24 21:43:40.022630214 +0000 UTC m=+2192.105216894" lastFinishedPulling="2025-11-24 21:43:40.710754247 +0000 UTC m=+2192.793340917" observedRunningTime="2025-11-24 21:43:41.701915414 +0000 UTC m=+2193.784502094" watchObservedRunningTime="2025-11-24 21:43:41.715536268 +0000 UTC m=+2193.798122948" Nov 24 21:43:44 crc kubenswrapper[4801]: I1124 21:43:44.571408 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:44 crc kubenswrapper[4801]: I1124 21:43:44.572154 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:44 crc kubenswrapper[4801]: I1124 21:43:44.644308 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:44 crc kubenswrapper[4801]: I1124 21:43:44.762761 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:44 crc kubenswrapper[4801]: I1124 21:43:44.901969 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:46 crc kubenswrapper[4801]: I1124 21:43:46.756629 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9kchb" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="registry-server" containerID="cri-o://26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13" gracePeriod=2 Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.358001 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.414625 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities\") pod \"49d46a20-7053-4918-9f0b-953feafa00cb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.414924 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77xwx\" (UniqueName: \"kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx\") pod \"49d46a20-7053-4918-9f0b-953feafa00cb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.415036 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content\") pod \"49d46a20-7053-4918-9f0b-953feafa00cb\" (UID: \"49d46a20-7053-4918-9f0b-953feafa00cb\") " Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.415879 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities" (OuterVolumeSpecName: "utilities") pod "49d46a20-7053-4918-9f0b-953feafa00cb" (UID: "49d46a20-7053-4918-9f0b-953feafa00cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.436031 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49d46a20-7053-4918-9f0b-953feafa00cb" (UID: "49d46a20-7053-4918-9f0b-953feafa00cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.438449 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx" (OuterVolumeSpecName: "kube-api-access-77xwx") pod "49d46a20-7053-4918-9f0b-953feafa00cb" (UID: "49d46a20-7053-4918-9f0b-953feafa00cb"). InnerVolumeSpecName "kube-api-access-77xwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.519529 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77xwx\" (UniqueName: \"kubernetes.io/projected/49d46a20-7053-4918-9f0b-953feafa00cb-kube-api-access-77xwx\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.519582 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.519596 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d46a20-7053-4918-9f0b-953feafa00cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.772244 4801 generic.go:334] "Generic (PLEG): container finished" podID="49d46a20-7053-4918-9f0b-953feafa00cb" containerID="26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13" exitCode=0 Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.772341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerDied","Data":"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13"} Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.772458 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kchb" event={"ID":"49d46a20-7053-4918-9f0b-953feafa00cb","Type":"ContainerDied","Data":"580ab804bb5ff1a926b33c21a093ed2de1efba057569c8f700ce8c823411db97"} Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.772500 4801 scope.go:117] "RemoveContainer" containerID="26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.772745 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kchb" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.839967 4801 scope.go:117] "RemoveContainer" containerID="75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.843154 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.873264 4801 scope.go:117] "RemoveContainer" containerID="db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.881260 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kchb"] Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.957295 4801 scope.go:117] "RemoveContainer" containerID="26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13" Nov 24 21:43:47 crc kubenswrapper[4801]: E1124 21:43:47.958023 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13\": container with ID starting with 26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13 not found: ID does not exist" containerID="26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 
21:43:47.958077 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13"} err="failed to get container status \"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13\": rpc error: code = NotFound desc = could not find container \"26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13\": container with ID starting with 26b279b1e2eacf87e0e3d77ae9a81bcd9d59b7bf90d8aaf22ac43c7072874a13 not found: ID does not exist" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.958117 4801 scope.go:117] "RemoveContainer" containerID="75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62" Nov 24 21:43:47 crc kubenswrapper[4801]: E1124 21:43:47.958594 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62\": container with ID starting with 75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62 not found: ID does not exist" containerID="75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.958640 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62"} err="failed to get container status \"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62\": rpc error: code = NotFound desc = could not find container \"75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62\": container with ID starting with 75fdeffffe1aec967a024649365ba334e099983c1115e61e2cf89e5ea8a06f62 not found: ID does not exist" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.958667 4801 scope.go:117] "RemoveContainer" containerID="db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db" Nov 24 21:43:47 crc 
kubenswrapper[4801]: E1124 21:43:47.959112 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db\": container with ID starting with db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db not found: ID does not exist" containerID="db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.959142 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db"} err="failed to get container status \"db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db\": rpc error: code = NotFound desc = could not find container \"db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db\": container with ID starting with db46a68370602fed4ce09eee21f45965a778c69cb8f0a8b8875350892f8e90db not found: ID does not exist" Nov 24 21:43:47 crc kubenswrapper[4801]: I1124 21:43:47.961764 4801 scope.go:117] "RemoveContainer" containerID="1d3a6bc4ad0f5eafb2ac66de5169de7fe6964a1d88aaf9dc37f274b1db661850" Nov 24 21:43:48 crc kubenswrapper[4801]: I1124 21:43:48.709618 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" path="/var/lib/kubelet/pods/49d46a20-7053-4918-9f0b-953feafa00cb/volumes" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.320356 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.321151 4801 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.321214 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.322459 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.322521 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" gracePeriod=600 Nov 24 21:43:54 crc kubenswrapper[4801]: E1124 21:43:54.473322 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.884070 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" 
containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" exitCode=0 Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.884141 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88"} Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.884231 4801 scope.go:117] "RemoveContainer" containerID="e95851a7e4adfaa39a5f53fbd943a83874eab98df94481a53956c3ef5883efd6" Nov 24 21:43:54 crc kubenswrapper[4801]: I1124 21:43:54.885605 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:43:54 crc kubenswrapper[4801]: E1124 21:43:54.886341 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:43:56 crc kubenswrapper[4801]: I1124 21:43:56.066082 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-8bbvs"] Nov 24 21:43:56 crc kubenswrapper[4801]: I1124 21:43:56.077350 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-8bbvs"] Nov 24 21:43:56 crc kubenswrapper[4801]: I1124 21:43:56.688744 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041806f3-13fb-43cc-98c2-180c16b5e3ea" path="/var/lib/kubelet/pods/041806f3-13fb-43cc-98c2-180c16b5e3ea/volumes" Nov 24 21:44:06 crc kubenswrapper[4801]: I1124 21:44:06.665540 4801 scope.go:117] "RemoveContainer" 
containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:44:06 crc kubenswrapper[4801]: E1124 21:44:06.666991 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:44:19 crc kubenswrapper[4801]: I1124 21:44:19.664986 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:44:19 crc kubenswrapper[4801]: E1124 21:44:19.666681 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:44:33 crc kubenswrapper[4801]: I1124 21:44:33.664883 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:44:33 crc kubenswrapper[4801]: E1124 21:44:33.666192 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:44:34 crc kubenswrapper[4801]: I1124 21:44:34.474018 4801 generic.go:334] 
"Generic (PLEG): container finished" podID="a7df1790-5911-4056-b880-6140a93203b7" containerID="e086feb1d2c97475bb543080fa43c82cde24ac33ae6861846eb427ee791cf3d5" exitCode=0 Nov 24 21:44:34 crc kubenswrapper[4801]: I1124 21:44:34.474194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" event={"ID":"a7df1790-5911-4056-b880-6140a93203b7","Type":"ContainerDied","Data":"e086feb1d2c97475bb543080fa43c82cde24ac33ae6861846eb427ee791cf3d5"} Nov 24 21:44:35 crc kubenswrapper[4801]: I1124 21:44:35.997698 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.058805 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.058895 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.058968 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059094 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059165 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059360 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059532 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059623 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059701 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059741 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059778 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059872 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.059991 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.060070 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-65kwl\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.060204 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.060251 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle\") pod \"a7df1790-5911-4056-b880-6140a93203b7\" (UID: \"a7df1790-5911-4056-b880-6140a93203b7\") " Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.069705 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.069744 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.071094 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.076722 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.081118 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.083458 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.085001 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.086135 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.087714 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl" (OuterVolumeSpecName: "kube-api-access-65kwl") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "kube-api-access-65kwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.087841 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.088959 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.089100 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.092070 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.098012 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.134371 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.152707 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory" (OuterVolumeSpecName: "inventory") pod "a7df1790-5911-4056-b880-6140a93203b7" (UID: "a7df1790-5911-4056-b880-6140a93203b7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164182 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164268 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164286 4801 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164328 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164344 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164361 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164405 4801 
reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164419 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65kwl\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-kube-api-access-65kwl\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164432 4801 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164444 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164488 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164501 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164514 4801 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: 
I1124 21:44:36.164531 4801 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164585 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7df1790-5911-4056-b880-6140a93203b7-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.164598 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7df1790-5911-4056-b880-6140a93203b7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.498081 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" event={"ID":"a7df1790-5911-4056-b880-6140a93203b7","Type":"ContainerDied","Data":"eda24da9b0afb7f4cbf18ebd78b92dc569ac660fe5ce84751f7f287d1f7a8a21"} Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.498134 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda24da9b0afb7f4cbf18ebd78b92dc569ac660fe5ce84751f7f287d1f7a8a21" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.498143 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5726k" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.627289 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7"] Nov 24 21:44:36 crc kubenswrapper[4801]: E1124 21:44:36.627854 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="registry-server" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.627873 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="registry-server" Nov 24 21:44:36 crc kubenswrapper[4801]: E1124 21:44:36.627888 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="extract-utilities" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.627895 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="extract-utilities" Nov 24 21:44:36 crc kubenswrapper[4801]: E1124 21:44:36.627909 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7df1790-5911-4056-b880-6140a93203b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.627919 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7df1790-5911-4056-b880-6140a93203b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:44:36 crc kubenswrapper[4801]: E1124 21:44:36.627934 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="extract-content" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.627939 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="extract-content" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.628212 
4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d46a20-7053-4918-9f0b-953feafa00cb" containerName="registry-server" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.628231 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7df1790-5911-4056-b880-6140a93203b7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.629156 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.631859 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.632092 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.632320 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.632525 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.633530 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.649798 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7"] Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.681574 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: 
\"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.681782 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfz29\" (UniqueName: \"kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.681894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.681928 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.682064 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.784595 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.784882 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.784984 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfz29\" (UniqueName: \"kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.785062 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.785093 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.786214 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.788291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.788434 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.795844 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.804218 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfz29\" (UniqueName: \"kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-p9dw7\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:36 crc kubenswrapper[4801]: I1124 21:44:36.950343 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:44:37 crc kubenswrapper[4801]: W1124 21:44:37.561802 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf1804da_480b_4cee_8b8e_e25ef5a6e119.slice/crio-e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23 WatchSource:0}: Error finding container e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23: Status 404 returned error can't find the container with id e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23 Nov 24 21:44:37 crc kubenswrapper[4801]: I1124 21:44:37.564013 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7"] Nov 24 21:44:38 crc kubenswrapper[4801]: I1124 21:44:38.528654 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" event={"ID":"bf1804da-480b-4cee-8b8e-e25ef5a6e119","Type":"ContainerStarted","Data":"738875f7ab30e6afee4121f75563212a3fe1b37fcca0fda3dd25d8293a6d86e7"} Nov 24 21:44:38 crc kubenswrapper[4801]: I1124 21:44:38.529415 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" event={"ID":"bf1804da-480b-4cee-8b8e-e25ef5a6e119","Type":"ContainerStarted","Data":"e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23"} Nov 24 21:44:38 crc kubenswrapper[4801]: I1124 21:44:38.563898 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" podStartSLOduration=2.103336216 
podStartE2EDuration="2.563870954s" podCreationTimestamp="2025-11-24 21:44:36 +0000 UTC" firstStartedPulling="2025-11-24 21:44:37.564354227 +0000 UTC m=+2249.646940897" lastFinishedPulling="2025-11-24 21:44:38.024888965 +0000 UTC m=+2250.107475635" observedRunningTime="2025-11-24 21:44:38.551101387 +0000 UTC m=+2250.633688067" watchObservedRunningTime="2025-11-24 21:44:38.563870954 +0000 UTC m=+2250.646457634" Nov 24 21:44:45 crc kubenswrapper[4801]: I1124 21:44:45.664437 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:44:45 crc kubenswrapper[4801]: E1124 21:44:45.665516 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:44:48 crc kubenswrapper[4801]: I1124 21:44:48.092455 4801 scope.go:117] "RemoveContainer" containerID="82cfbdb1027b6b034bc5287e84f27a347e2bbb3596b2f70aea1740f4ca353283" Nov 24 21:44:56 crc kubenswrapper[4801]: I1124 21:44:56.119739 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77d858bcc9-bwkzb" podUID="44f65da7-819a-43dd-9267-2b30cffff0f2" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 21:44:57 crc kubenswrapper[4801]: I1124 21:44:57.664805 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:44:57 crc kubenswrapper[4801]: E1124 21:44:57.665915 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.188574 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9"] Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.192170 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.194741 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.195114 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.209301 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9"] Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.298084 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6fs\" (UniqueName: \"kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.298602 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume\") pod 
\"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.298895 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.403862 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.404056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.404486 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc6fs\" (UniqueName: \"kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.405111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.418174 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.424220 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc6fs\" (UniqueName: \"kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs\") pod \"collect-profiles-29400345-fc9l9\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:00 crc kubenswrapper[4801]: I1124 21:45:00.553904 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:01 crc kubenswrapper[4801]: I1124 21:45:01.132577 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9"] Nov 24 21:45:01 crc kubenswrapper[4801]: I1124 21:45:01.926579 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" event={"ID":"dbd0c6ae-984a-45e2-a334-bc22a349db84","Type":"ContainerStarted","Data":"dc22f23ab780848ba0f10694a28bed06707e7863da9529967a9febcf1463b4ea"} Nov 24 21:45:01 crc kubenswrapper[4801]: I1124 21:45:01.927142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" event={"ID":"dbd0c6ae-984a-45e2-a334-bc22a349db84","Type":"ContainerStarted","Data":"39f9562e26c91028cecb5668f43d776e68d157dd5990950ccc319215b9f80f1e"} Nov 24 21:45:01 crc kubenswrapper[4801]: I1124 21:45:01.960794 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" podStartSLOduration=1.960771517 podStartE2EDuration="1.960771517s" podCreationTimestamp="2025-11-24 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:01.9528572 +0000 UTC m=+2274.035443870" watchObservedRunningTime="2025-11-24 21:45:01.960771517 +0000 UTC m=+2274.043358187" Nov 24 21:45:02 crc kubenswrapper[4801]: I1124 21:45:02.949117 4801 generic.go:334] "Generic (PLEG): container finished" podID="dbd0c6ae-984a-45e2-a334-bc22a349db84" containerID="dc22f23ab780848ba0f10694a28bed06707e7863da9529967a9febcf1463b4ea" exitCode=0 Nov 24 21:45:02 crc kubenswrapper[4801]: I1124 21:45:02.949185 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" event={"ID":"dbd0c6ae-984a-45e2-a334-bc22a349db84","Type":"ContainerDied","Data":"dc22f23ab780848ba0f10694a28bed06707e7863da9529967a9febcf1463b4ea"} Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.433850 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.535494 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume\") pod \"dbd0c6ae-984a-45e2-a334-bc22a349db84\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.535535 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume\") pod \"dbd0c6ae-984a-45e2-a334-bc22a349db84\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.536044 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc6fs\" (UniqueName: \"kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs\") pod \"dbd0c6ae-984a-45e2-a334-bc22a349db84\" (UID: \"dbd0c6ae-984a-45e2-a334-bc22a349db84\") " Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.536852 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume" (OuterVolumeSpecName: "config-volume") pod "dbd0c6ae-984a-45e2-a334-bc22a349db84" (UID: "dbd0c6ae-984a-45e2-a334-bc22a349db84"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.545822 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs" (OuterVolumeSpecName: "kube-api-access-qc6fs") pod "dbd0c6ae-984a-45e2-a334-bc22a349db84" (UID: "dbd0c6ae-984a-45e2-a334-bc22a349db84"). InnerVolumeSpecName "kube-api-access-qc6fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.546540 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dbd0c6ae-984a-45e2-a334-bc22a349db84" (UID: "dbd0c6ae-984a-45e2-a334-bc22a349db84"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.639948 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc6fs\" (UniqueName: \"kubernetes.io/projected/dbd0c6ae-984a-45e2-a334-bc22a349db84-kube-api-access-qc6fs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.640265 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbd0c6ae-984a-45e2-a334-bc22a349db84-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.640340 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd0c6ae-984a-45e2-a334-bc22a349db84-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.975952 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" 
event={"ID":"dbd0c6ae-984a-45e2-a334-bc22a349db84","Type":"ContainerDied","Data":"39f9562e26c91028cecb5668f43d776e68d157dd5990950ccc319215b9f80f1e"} Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.976387 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f9562e26c91028cecb5668f43d776e68d157dd5990950ccc319215b9f80f1e" Nov 24 21:45:04 crc kubenswrapper[4801]: I1124 21:45:04.976090 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9" Nov 24 21:45:05 crc kubenswrapper[4801]: I1124 21:45:05.047889 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m"] Nov 24 21:45:05 crc kubenswrapper[4801]: I1124 21:45:05.061618 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-wbg7m"] Nov 24 21:45:06 crc kubenswrapper[4801]: I1124 21:45:06.694873 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8c2382-6e5d-49cb-9028-43b797a70879" path="/var/lib/kubelet/pods/ac8c2382-6e5d-49cb-9028-43b797a70879/volumes" Nov 24 21:45:12 crc kubenswrapper[4801]: I1124 21:45:12.665192 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:45:12 crc kubenswrapper[4801]: E1124 21:45:12.666938 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:45:26 crc kubenswrapper[4801]: I1124 21:45:26.664698 4801 scope.go:117] "RemoveContainer" 
containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:45:26 crc kubenswrapper[4801]: E1124 21:45:26.665987 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:45:37 crc kubenswrapper[4801]: I1124 21:45:37.665214 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:45:37 crc kubenswrapper[4801]: E1124 21:45:37.669994 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:45:48 crc kubenswrapper[4801]: I1124 21:45:48.212109 4801 scope.go:117] "RemoveContainer" containerID="bce23eb994a8eaf6aef36947cba09e5e22f74471367ad73df05113d1b6c0c7db" Nov 24 21:45:49 crc kubenswrapper[4801]: I1124 21:45:49.664289 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:45:49 crc kubenswrapper[4801]: E1124 21:45:49.665657 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:45:52 crc kubenswrapper[4801]: I1124 21:45:52.724395 4801 generic.go:334] "Generic (PLEG): container finished" podID="bf1804da-480b-4cee-8b8e-e25ef5a6e119" containerID="738875f7ab30e6afee4121f75563212a3fe1b37fcca0fda3dd25d8293a6d86e7" exitCode=0 Nov 24 21:45:52 crc kubenswrapper[4801]: I1124 21:45:52.724494 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" event={"ID":"bf1804da-480b-4cee-8b8e-e25ef5a6e119","Type":"ContainerDied","Data":"738875f7ab30e6afee4121f75563212a3fe1b37fcca0fda3dd25d8293a6d86e7"} Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.324928 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.461128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key\") pod \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.461485 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0\") pod \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.461564 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory\") pod \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " Nov 24 21:45:54 crc kubenswrapper[4801]: 
I1124 21:45:54.461968 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfz29\" (UniqueName: \"kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29\") pod \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.462042 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle\") pod \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\" (UID: \"bf1804da-480b-4cee-8b8e-e25ef5a6e119\") " Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.467359 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29" (OuterVolumeSpecName: "kube-api-access-mfz29") pod "bf1804da-480b-4cee-8b8e-e25ef5a6e119" (UID: "bf1804da-480b-4cee-8b8e-e25ef5a6e119"). InnerVolumeSpecName "kube-api-access-mfz29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.472774 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bf1804da-480b-4cee-8b8e-e25ef5a6e119" (UID: "bf1804da-480b-4cee-8b8e-e25ef5a6e119"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.505986 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory" (OuterVolumeSpecName: "inventory") pod "bf1804da-480b-4cee-8b8e-e25ef5a6e119" (UID: "bf1804da-480b-4cee-8b8e-e25ef5a6e119"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.509144 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf1804da-480b-4cee-8b8e-e25ef5a6e119" (UID: "bf1804da-480b-4cee-8b8e-e25ef5a6e119"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.523721 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bf1804da-480b-4cee-8b8e-e25ef5a6e119" (UID: "bf1804da-480b-4cee-8b8e-e25ef5a6e119"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.565653 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.565686 4801 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.565697 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.565706 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfz29\" (UniqueName: \"kubernetes.io/projected/bf1804da-480b-4cee-8b8e-e25ef5a6e119-kube-api-access-mfz29\") on node \"crc\" DevicePath 
\"\"" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.565715 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1804da-480b-4cee-8b8e-e25ef5a6e119-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.765026 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" event={"ID":"bf1804da-480b-4cee-8b8e-e25ef5a6e119","Type":"ContainerDied","Data":"e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23"} Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.765083 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0753a267863b676e305a3ef1a042130b15e2348d706d3a63bec00b140ddfe23" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.765135 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-p9dw7" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.893921 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt"] Nov 24 21:45:54 crc kubenswrapper[4801]: E1124 21:45:54.894670 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd0c6ae-984a-45e2-a334-bc22a349db84" containerName="collect-profiles" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.894701 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd0c6ae-984a-45e2-a334-bc22a349db84" containerName="collect-profiles" Nov 24 21:45:54 crc kubenswrapper[4801]: E1124 21:45:54.894815 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1804da-480b-4cee-8b8e-e25ef5a6e119" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.894840 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf1804da-480b-4cee-8b8e-e25ef5a6e119" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.895223 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1804da-480b-4cee-8b8e-e25ef5a6e119" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.895311 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd0c6ae-984a-45e2-a334-bc22a349db84" containerName="collect-profiles" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.896877 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.913606 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.913663 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.914073 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.914140 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.914314 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.914429 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 21:45:54 crc kubenswrapper[4801]: I1124 21:45:54.916429 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt"] Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.082403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.083031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.083098 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.083161 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.083221 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wbx\" (UniqueName: \"kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.084053 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188114 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188497 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188573 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.188702 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wbx\" (UniqueName: \"kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.193669 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.195475 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.200965 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.202493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.209270 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.224203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wbx\" (UniqueName: \"kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.226244 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:45:55 crc kubenswrapper[4801]: I1124 21:45:55.923448 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt"] Nov 24 21:45:56 crc kubenswrapper[4801]: I1124 21:45:56.826469 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" event={"ID":"92764f41-c2a3-479e-a671-81039586b065","Type":"ContainerStarted","Data":"e4ba7a2a26fe8eb1d4a42f5d58990de0362df3b7fb9a5e7df179ec9e4273fa12"} Nov 24 21:45:57 crc kubenswrapper[4801]: I1124 21:45:57.842049 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" event={"ID":"92764f41-c2a3-479e-a671-81039586b065","Type":"ContainerStarted","Data":"42877b46dbfe76520da09ff47f5df3332e61adfc43f0eefeaf0b47a56064ad72"} Nov 24 21:45:57 crc kubenswrapper[4801]: I1124 21:45:57.874126 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" podStartSLOduration=3.249720944 
podStartE2EDuration="3.874097462s" podCreationTimestamp="2025-11-24 21:45:54 +0000 UTC" firstStartedPulling="2025-11-24 21:45:55.935884451 +0000 UTC m=+2328.018471121" lastFinishedPulling="2025-11-24 21:45:56.560260969 +0000 UTC m=+2328.642847639" observedRunningTime="2025-11-24 21:45:57.861568572 +0000 UTC m=+2329.944155262" watchObservedRunningTime="2025-11-24 21:45:57.874097462 +0000 UTC m=+2329.956684142" Nov 24 21:46:04 crc kubenswrapper[4801]: I1124 21:46:04.664935 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:46:04 crc kubenswrapper[4801]: E1124 21:46:04.666295 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:46:15 crc kubenswrapper[4801]: I1124 21:46:15.664757 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:46:15 crc kubenswrapper[4801]: E1124 21:46:15.666602 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:46:27 crc kubenswrapper[4801]: I1124 21:46:27.665022 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:46:27 crc kubenswrapper[4801]: E1124 21:46:27.666040 
4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:46:41 crc kubenswrapper[4801]: I1124 21:46:41.665527 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:46:41 crc kubenswrapper[4801]: E1124 21:46:41.668757 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:46:53 crc kubenswrapper[4801]: I1124 21:46:53.720200 4801 generic.go:334] "Generic (PLEG): container finished" podID="92764f41-c2a3-479e-a671-81039586b065" containerID="42877b46dbfe76520da09ff47f5df3332e61adfc43f0eefeaf0b47a56064ad72" exitCode=0 Nov 24 21:46:53 crc kubenswrapper[4801]: I1124 21:46:53.720283 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" event={"ID":"92764f41-c2a3-479e-a671-81039586b065","Type":"ContainerDied","Data":"42877b46dbfe76520da09ff47f5df3332e61adfc43f0eefeaf0b47a56064ad72"} Nov 24 21:46:54 crc kubenswrapper[4801]: I1124 21:46:54.665252 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:46:54 crc kubenswrapper[4801]: E1124 21:46:54.666055 4801 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.518676 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.631425 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.631541 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.631603 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.631806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wbx\" (UniqueName: \"kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: 
\"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.631940 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.632028 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0\") pod \"92764f41-c2a3-479e-a671-81039586b065\" (UID: \"92764f41-c2a3-479e-a671-81039586b065\") " Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.645956 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.653621 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx" (OuterVolumeSpecName: "kube-api-access-c9wbx") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "kube-api-access-c9wbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.678056 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.680865 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.681069 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory" (OuterVolumeSpecName: "inventory") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.700506 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92764f41-c2a3-479e-a671-81039586b065" (UID: "92764f41-c2a3-479e-a671-81039586b065"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737142 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737194 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737710 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737743 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wbx\" (UniqueName: \"kubernetes.io/projected/92764f41-c2a3-479e-a671-81039586b065-kube-api-access-c9wbx\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737777 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.737808 4801 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92764f41-c2a3-479e-a671-81039586b065-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.753617 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" 
event={"ID":"92764f41-c2a3-479e-a671-81039586b065","Type":"ContainerDied","Data":"e4ba7a2a26fe8eb1d4a42f5d58990de0362df3b7fb9a5e7df179ec9e4273fa12"} Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.753688 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.753693 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ba7a2a26fe8eb1d4a42f5d58990de0362df3b7fb9a5e7df179ec9e4273fa12" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.876455 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf"] Nov 24 21:46:55 crc kubenswrapper[4801]: E1124 21:46:55.877488 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92764f41-c2a3-479e-a671-81039586b065" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.877518 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="92764f41-c2a3-479e-a671-81039586b065" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.878020 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="92764f41-c2a3-479e-a671-81039586b065" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.879564 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.882711 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.883011 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.883251 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.884944 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.885141 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:46:55 crc kubenswrapper[4801]: I1124 21:46:55.890804 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf"] Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.050044 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.050556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxb4\" (UniqueName: \"kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: 
\"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.050720 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.050794 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.050909 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.153798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxb4\" (UniqueName: \"kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.153974 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.154048 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.154117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.154287 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.160046 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.166841 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.168469 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.172067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxb4\" (UniqueName: \"kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.174287 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.217686 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:46:56 crc kubenswrapper[4801]: I1124 21:46:56.917249 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf"] Nov 24 21:46:57 crc kubenswrapper[4801]: I1124 21:46:57.787052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" event={"ID":"c53246eb-91e4-40f1-b8e6-c76fdade9d9d","Type":"ContainerStarted","Data":"793dee8bcf6189ecd902c7d66063f49cf42c6b2b8b191c9415145857c51d7c75"} Nov 24 21:46:58 crc kubenswrapper[4801]: I1124 21:46:58.801173 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" event={"ID":"c53246eb-91e4-40f1-b8e6-c76fdade9d9d","Type":"ContainerStarted","Data":"e3d510a2b5eac3cec5a78f32f2d880961cf563cca2f199a389a0348a20c7bbac"} Nov 24 21:46:58 crc kubenswrapper[4801]: I1124 21:46:58.828441 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" podStartSLOduration=3.085840559 podStartE2EDuration="3.828414227s" podCreationTimestamp="2025-11-24 21:46:55 +0000 UTC" firstStartedPulling="2025-11-24 21:46:56.918702463 +0000 UTC m=+2389.001289133" lastFinishedPulling="2025-11-24 21:46:57.661276081 +0000 UTC m=+2389.743862801" observedRunningTime="2025-11-24 21:46:58.821236594 +0000 UTC m=+2390.903823274" watchObservedRunningTime="2025-11-24 21:46:58.828414227 +0000 UTC m=+2390.911000897" Nov 24 21:47:09 crc kubenswrapper[4801]: I1124 21:47:09.664621 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:47:09 crc kubenswrapper[4801]: E1124 21:47:09.668447 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:47:21 crc kubenswrapper[4801]: I1124 21:47:21.664065 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:47:21 crc kubenswrapper[4801]: E1124 21:47:21.664952 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:47:35 crc kubenswrapper[4801]: I1124 21:47:35.665114 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:47:35 crc kubenswrapper[4801]: E1124 21:47:35.669170 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:47:50 crc kubenswrapper[4801]: I1124 21:47:50.666120 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:47:50 crc kubenswrapper[4801]: E1124 21:47:50.667265 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:48:01 crc kubenswrapper[4801]: I1124 21:48:01.664945 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:48:01 crc kubenswrapper[4801]: E1124 21:48:01.666493 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:48:12 crc kubenswrapper[4801]: I1124 21:48:12.664750 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:48:12 crc kubenswrapper[4801]: E1124 21:48:12.665741 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:48:25 crc kubenswrapper[4801]: I1124 21:48:25.665391 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:48:25 crc kubenswrapper[4801]: E1124 21:48:25.666593 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:48:40 crc kubenswrapper[4801]: I1124 21:48:40.664919 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:48:40 crc kubenswrapper[4801]: E1124 21:48:40.665945 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:48:54 crc kubenswrapper[4801]: I1124 21:48:54.665127 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:48:55 crc kubenswrapper[4801]: I1124 21:48:55.471784 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a"} Nov 24 21:50:54 crc kubenswrapper[4801]: I1124 21:50:54.319697 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:50:54 crc kubenswrapper[4801]: I1124 21:50:54.320257 4801 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:51:24 crc kubenswrapper[4801]: I1124 21:51:24.320496 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:51:24 crc kubenswrapper[4801]: I1124 21:51:24.321227 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:51:43 crc kubenswrapper[4801]: I1124 21:51:43.030282 4801 generic.go:334] "Generic (PLEG): container finished" podID="c53246eb-91e4-40f1-b8e6-c76fdade9d9d" containerID="e3d510a2b5eac3cec5a78f32f2d880961cf563cca2f199a389a0348a20c7bbac" exitCode=0 Nov 24 21:51:43 crc kubenswrapper[4801]: I1124 21:51:43.030358 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" event={"ID":"c53246eb-91e4-40f1-b8e6-c76fdade9d9d","Type":"ContainerDied","Data":"e3d510a2b5eac3cec5a78f32f2d880961cf563cca2f199a389a0348a20c7bbac"} Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.628905 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.729192 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key\") pod \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.729308 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle\") pod \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.729459 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knxb4\" (UniqueName: \"kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4\") pod \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.729518 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory\") pod \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.729616 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0\") pod \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\" (UID: \"c53246eb-91e4-40f1-b8e6-c76fdade9d9d\") " Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.738640 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4" (OuterVolumeSpecName: "kube-api-access-knxb4") pod "c53246eb-91e4-40f1-b8e6-c76fdade9d9d" (UID: "c53246eb-91e4-40f1-b8e6-c76fdade9d9d"). InnerVolumeSpecName "kube-api-access-knxb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.740650 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c53246eb-91e4-40f1-b8e6-c76fdade9d9d" (UID: "c53246eb-91e4-40f1-b8e6-c76fdade9d9d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.764776 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c53246eb-91e4-40f1-b8e6-c76fdade9d9d" (UID: "c53246eb-91e4-40f1-b8e6-c76fdade9d9d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.775825 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c53246eb-91e4-40f1-b8e6-c76fdade9d9d" (UID: "c53246eb-91e4-40f1-b8e6-c76fdade9d9d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.777351 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory" (OuterVolumeSpecName: "inventory") pod "c53246eb-91e4-40f1-b8e6-c76fdade9d9d" (UID: "c53246eb-91e4-40f1-b8e6-c76fdade9d9d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.832591 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.832628 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.832641 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knxb4\" (UniqueName: \"kubernetes.io/projected/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-kube-api-access-knxb4\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.832650 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:44 crc kubenswrapper[4801]: I1124 21:51:44.832660 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c53246eb-91e4-40f1-b8e6-c76fdade9d9d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.057883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" event={"ID":"c53246eb-91e4-40f1-b8e6-c76fdade9d9d","Type":"ContainerDied","Data":"793dee8bcf6189ecd902c7d66063f49cf42c6b2b8b191c9415145857c51d7c75"} Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.057934 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793dee8bcf6189ecd902c7d66063f49cf42c6b2b8b191c9415145857c51d7c75" Nov 24 21:51:45 
crc kubenswrapper[4801]: I1124 21:51:45.058014 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.192899 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d"] Nov 24 21:51:45 crc kubenswrapper[4801]: E1124 21:51:45.193846 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53246eb-91e4-40f1-b8e6-c76fdade9d9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.193873 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53246eb-91e4-40f1-b8e6-c76fdade9d9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.194187 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53246eb-91e4-40f1-b8e6-c76fdade9d9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.195287 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.198741 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.198741 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.201304 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.201467 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.201947 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.206602 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.206602 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.237481 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d"] Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.243604 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 
21:51:45.243749 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.243778 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.243821 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.243885 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.243986 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.244049 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.244102 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.244132 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjh92\" (UniqueName: \"kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.346908 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.346994 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347022 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjh92\" (UniqueName: \"kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347252 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347277 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347334 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347406 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.347482 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.348807 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 
24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.351405 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.352596 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.352678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.358066 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.358493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" 
(UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.359297 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.362014 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.371756 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjh92\" (UniqueName: \"kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cp65d\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:45 crc kubenswrapper[4801]: I1124 21:51:45.536453 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:51:46 crc kubenswrapper[4801]: I1124 21:51:46.177062 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d"] Nov 24 21:51:46 crc kubenswrapper[4801]: I1124 21:51:46.183253 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:51:47 crc kubenswrapper[4801]: I1124 21:51:47.083530 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" event={"ID":"3335f4ce-4e53-47f6-b241-792b016762da","Type":"ContainerStarted","Data":"ccafcff4ab808760867f3a80d26ca2b735ac60155cab11ef14e7a8dc39910fb6"} Nov 24 21:51:47 crc kubenswrapper[4801]: I1124 21:51:47.083932 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" event={"ID":"3335f4ce-4e53-47f6-b241-792b016762da","Type":"ContainerStarted","Data":"885bb2f1c71777ae4ca3208e9313f4f03607d089f908158977c6163cdb5a61ed"} Nov 24 21:51:47 crc kubenswrapper[4801]: I1124 21:51:47.104444 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" podStartSLOduration=1.5744283270000001 podStartE2EDuration="2.104422821s" podCreationTimestamp="2025-11-24 21:51:45 +0000 UTC" firstStartedPulling="2025-11-24 21:51:46.183031669 +0000 UTC m=+2678.265618339" lastFinishedPulling="2025-11-24 21:51:46.713026153 +0000 UTC m=+2678.795612833" observedRunningTime="2025-11-24 21:51:47.098119755 +0000 UTC m=+2679.180706425" watchObservedRunningTime="2025-11-24 21:51:47.104422821 +0000 UTC m=+2679.187009491" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.486722 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.493615 4801 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.507777 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.597903 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8d5r\" (UniqueName: \"kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.597965 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.598528 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.702257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.702979 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.703300 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8d5r\" (UniqueName: \"kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.703388 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.703928 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.729272 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8d5r\" (UniqueName: \"kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r\") pod \"redhat-operators-jwkfn\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:53 crc kubenswrapper[4801]: I1124 21:51:53.815344 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.320454 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.320799 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.320853 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.322530 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.322757 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a" gracePeriod=600 Nov 24 21:51:54 crc kubenswrapper[4801]: I1124 21:51:54.370129 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:51:54 crc kubenswrapper[4801]: W1124 21:51:54.375834 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8589f501_fd8c_4265_8330_9680a7d4ef89.slice/crio-d6223f8bf7184b059bd8aa095d724f41a8301bfbe086c14c091bae5fea487aae WatchSource:0}: Error finding container d6223f8bf7184b059bd8aa095d724f41a8301bfbe086c14c091bae5fea487aae: Status 404 returned error can't find the container with id d6223f8bf7184b059bd8aa095d724f41a8301bfbe086c14c091bae5fea487aae Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.184097 4801 generic.go:334] "Generic (PLEG): container finished" podID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerID="98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd" exitCode=0 Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.184212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerDied","Data":"98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd"} Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.184636 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerStarted","Data":"d6223f8bf7184b059bd8aa095d724f41a8301bfbe086c14c091bae5fea487aae"} Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.188399 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a"} Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.188402 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" 
containerID="32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a" exitCode=0 Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.188461 4801 scope.go:117] "RemoveContainer" containerID="10c82162b0a5a333e4195140b0b3fd8785ba3331199cb09fc47d7ac9178b5c88" Nov 24 21:51:55 crc kubenswrapper[4801]: I1124 21:51:55.188514 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260"} Nov 24 21:51:56 crc kubenswrapper[4801]: I1124 21:51:56.210890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerStarted","Data":"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f"} Nov 24 21:52:00 crc kubenswrapper[4801]: I1124 21:52:00.276048 4801 generic.go:334] "Generic (PLEG): container finished" podID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerID="a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f" exitCode=0 Nov 24 21:52:00 crc kubenswrapper[4801]: I1124 21:52:00.276124 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerDied","Data":"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f"} Nov 24 21:52:01 crc kubenswrapper[4801]: I1124 21:52:01.290845 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerStarted","Data":"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba"} Nov 24 21:52:01 crc kubenswrapper[4801]: I1124 21:52:01.332639 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jwkfn" podStartSLOduration=2.543174219 podStartE2EDuration="8.332604475s" podCreationTimestamp="2025-11-24 21:51:53 +0000 UTC" firstStartedPulling="2025-11-24 21:51:55.188136313 +0000 UTC m=+2687.270723023" lastFinishedPulling="2025-11-24 21:52:00.977566569 +0000 UTC m=+2693.060153279" observedRunningTime="2025-11-24 21:52:01.321347774 +0000 UTC m=+2693.403934464" watchObservedRunningTime="2025-11-24 21:52:01.332604475 +0000 UTC m=+2693.415191185" Nov 24 21:52:03 crc kubenswrapper[4801]: I1124 21:52:03.816617 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:03 crc kubenswrapper[4801]: I1124 21:52:03.817301 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:04 crc kubenswrapper[4801]: I1124 21:52:04.887955 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwkfn" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="registry-server" probeResult="failure" output=< Nov 24 21:52:04 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 21:52:04 crc kubenswrapper[4801]: > Nov 24 21:52:13 crc kubenswrapper[4801]: I1124 21:52:13.900837 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:13 crc kubenswrapper[4801]: I1124 21:52:13.977230 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:14 crc kubenswrapper[4801]: I1124 21:52:14.155238 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:52:15 crc kubenswrapper[4801]: I1124 21:52:15.466887 4801 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-jwkfn" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="registry-server" containerID="cri-o://a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba" gracePeriod=2 Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.045186 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.128244 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities\") pod \"8589f501-fd8c-4265-8330-9680a7d4ef89\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.128315 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content\") pod \"8589f501-fd8c-4265-8330-9680a7d4ef89\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.128618 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8d5r\" (UniqueName: \"kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r\") pod \"8589f501-fd8c-4265-8330-9680a7d4ef89\" (UID: \"8589f501-fd8c-4265-8330-9680a7d4ef89\") " Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.128985 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities" (OuterVolumeSpecName: "utilities") pod "8589f501-fd8c-4265-8330-9680a7d4ef89" (UID: "8589f501-fd8c-4265-8330-9680a7d4ef89"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.129463 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.143590 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r" (OuterVolumeSpecName: "kube-api-access-r8d5r") pod "8589f501-fd8c-4265-8330-9680a7d4ef89" (UID: "8589f501-fd8c-4265-8330-9680a7d4ef89"). InnerVolumeSpecName "kube-api-access-r8d5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.232202 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8d5r\" (UniqueName: \"kubernetes.io/projected/8589f501-fd8c-4265-8330-9680a7d4ef89-kube-api-access-r8d5r\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.238111 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8589f501-fd8c-4265-8330-9680a7d4ef89" (UID: "8589f501-fd8c-4265-8330-9680a7d4ef89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.336065 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8589f501-fd8c-4265-8330-9680a7d4ef89-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.492775 4801 generic.go:334] "Generic (PLEG): container finished" podID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerID="a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba" exitCode=0 Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.492837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerDied","Data":"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba"} Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.492882 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkfn" event={"ID":"8589f501-fd8c-4265-8330-9680a7d4ef89","Type":"ContainerDied","Data":"d6223f8bf7184b059bd8aa095d724f41a8301bfbe086c14c091bae5fea487aae"} Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.492911 4801 scope.go:117] "RemoveContainer" containerID="a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.496031 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkfn" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.544917 4801 scope.go:117] "RemoveContainer" containerID="a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.587954 4801 scope.go:117] "RemoveContainer" containerID="98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.588603 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.604899 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jwkfn"] Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.656587 4801 scope.go:117] "RemoveContainer" containerID="a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba" Nov 24 21:52:16 crc kubenswrapper[4801]: E1124 21:52:16.657141 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba\": container with ID starting with a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba not found: ID does not exist" containerID="a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.657315 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba"} err="failed to get container status \"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba\": rpc error: code = NotFound desc = could not find container \"a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba\": container with ID starting with a40bf9a3171b680388f89eeb9dcde9ba7e21a44d83e7d03d295a6bba8bfaccba not found: ID does 
not exist" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.657483 4801 scope.go:117] "RemoveContainer" containerID="a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f" Nov 24 21:52:16 crc kubenswrapper[4801]: E1124 21:52:16.658186 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f\": container with ID starting with a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f not found: ID does not exist" containerID="a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.658227 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f"} err="failed to get container status \"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f\": rpc error: code = NotFound desc = could not find container \"a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f\": container with ID starting with a36a24e53b8fe877378158891d706e196743aff80718cd2640116c41fde7411f not found: ID does not exist" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.658256 4801 scope.go:117] "RemoveContainer" containerID="98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd" Nov 24 21:52:16 crc kubenswrapper[4801]: E1124 21:52:16.658663 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd\": container with ID starting with 98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd not found: ID does not exist" containerID="98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.658731 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd"} err="failed to get container status \"98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd\": rpc error: code = NotFound desc = could not find container \"98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd\": container with ID starting with 98c77d71d2889e8f4af9c70b4eb22adeade3b658da15dbcfa806967159201bbd not found: ID does not exist" Nov 24 21:52:16 crc kubenswrapper[4801]: I1124 21:52:16.688230 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" path="/var/lib/kubelet/pods/8589f501-fd8c-4265-8330-9680a7d4ef89/volumes" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.756261 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:37 crc kubenswrapper[4801]: E1124 21:52:37.757886 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="extract-utilities" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.757909 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="extract-utilities" Nov 24 21:52:37 crc kubenswrapper[4801]: E1124 21:52:37.757940 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="extract-content" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.757951 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="extract-content" Nov 24 21:52:37 crc kubenswrapper[4801]: E1124 21:52:37.757971 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="registry-server" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.757983 4801 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="registry-server" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.758475 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8589f501-fd8c-4265-8330-9680a7d4ef89" containerName="registry-server" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.761441 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.774933 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.962541 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.962658 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxz4n\" (UniqueName: \"kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:37 crc kubenswrapper[4801]: I1124 21:52:37.962800 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 
21:52:38.065606 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.066061 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxz4n\" (UniqueName: \"kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.066285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.066956 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.066960 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.100014 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxz4n\" (UniqueName: \"kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n\") pod \"certified-operators-dqlbs\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.104058 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.605136 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:38 crc kubenswrapper[4801]: W1124 21:52:38.613057 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c83abc_9b5e_4d72_8606_8d0fdff505a2.slice/crio-c96f02f0cb039c2d803866e04d1deaa709c77fcb2aca5416d723d0b05cb22406 WatchSource:0}: Error finding container c96f02f0cb039c2d803866e04d1deaa709c77fcb2aca5416d723d0b05cb22406: Status 404 returned error can't find the container with id c96f02f0cb039c2d803866e04d1deaa709c77fcb2aca5416d723d0b05cb22406 Nov 24 21:52:38 crc kubenswrapper[4801]: I1124 21:52:38.823519 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerStarted","Data":"c96f02f0cb039c2d803866e04d1deaa709c77fcb2aca5416d723d0b05cb22406"} Nov 24 21:52:39 crc kubenswrapper[4801]: I1124 21:52:39.842734 4801 generic.go:334] "Generic (PLEG): container finished" podID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerID="da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187" exitCode=0 Nov 24 21:52:39 crc kubenswrapper[4801]: I1124 21:52:39.842802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" 
event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerDied","Data":"da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187"} Nov 24 21:52:40 crc kubenswrapper[4801]: I1124 21:52:40.858159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerStarted","Data":"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99"} Nov 24 21:52:41 crc kubenswrapper[4801]: I1124 21:52:41.875931 4801 generic.go:334] "Generic (PLEG): container finished" podID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerID="88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99" exitCode=0 Nov 24 21:52:41 crc kubenswrapper[4801]: I1124 21:52:41.876042 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerDied","Data":"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99"} Nov 24 21:52:42 crc kubenswrapper[4801]: I1124 21:52:42.892235 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerStarted","Data":"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206"} Nov 24 21:52:42 crc kubenswrapper[4801]: I1124 21:52:42.921960 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqlbs" podStartSLOduration=3.470420852 podStartE2EDuration="5.921934433s" podCreationTimestamp="2025-11-24 21:52:37 +0000 UTC" firstStartedPulling="2025-11-24 21:52:39.845807381 +0000 UTC m=+2731.928394061" lastFinishedPulling="2025-11-24 21:52:42.297320972 +0000 UTC m=+2734.379907642" observedRunningTime="2025-11-24 21:52:42.915716709 +0000 UTC m=+2734.998303419" watchObservedRunningTime="2025-11-24 21:52:42.921934433 +0000 UTC 
m=+2735.004521123" Nov 24 21:52:48 crc kubenswrapper[4801]: I1124 21:52:48.104253 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:48 crc kubenswrapper[4801]: I1124 21:52:48.107003 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:48 crc kubenswrapper[4801]: I1124 21:52:48.194820 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:49 crc kubenswrapper[4801]: I1124 21:52:49.035701 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:49 crc kubenswrapper[4801]: I1124 21:52:49.101992 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.005452 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dqlbs" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="registry-server" containerID="cri-o://284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206" gracePeriod=2 Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.604354 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.775128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities\") pod \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.775321 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content\") pod \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.775743 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxz4n\" (UniqueName: \"kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n\") pod \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\" (UID: \"13c83abc-9b5e-4d72-8606-8d0fdff505a2\") " Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.776312 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities" (OuterVolumeSpecName: "utilities") pod "13c83abc-9b5e-4d72-8606-8d0fdff505a2" (UID: "13c83abc-9b5e-4d72-8606-8d0fdff505a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.777297 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.786814 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n" (OuterVolumeSpecName: "kube-api-access-lxz4n") pod "13c83abc-9b5e-4d72-8606-8d0fdff505a2" (UID: "13c83abc-9b5e-4d72-8606-8d0fdff505a2"). InnerVolumeSpecName "kube-api-access-lxz4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.819421 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13c83abc-9b5e-4d72-8606-8d0fdff505a2" (UID: "13c83abc-9b5e-4d72-8606-8d0fdff505a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.880005 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c83abc-9b5e-4d72-8606-8d0fdff505a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4801]: I1124 21:52:51.880033 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxz4n\" (UniqueName: \"kubernetes.io/projected/13c83abc-9b5e-4d72-8606-8d0fdff505a2-kube-api-access-lxz4n\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.020756 4801 generic.go:334] "Generic (PLEG): container finished" podID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerID="284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206" exitCode=0 Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.020813 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerDied","Data":"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206"} Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.020842 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqlbs" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.020883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqlbs" event={"ID":"13c83abc-9b5e-4d72-8606-8d0fdff505a2","Type":"ContainerDied","Data":"c96f02f0cb039c2d803866e04d1deaa709c77fcb2aca5416d723d0b05cb22406"} Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.020944 4801 scope.go:117] "RemoveContainer" containerID="284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.068688 4801 scope.go:117] "RemoveContainer" containerID="88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.080294 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.103277 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dqlbs"] Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.122537 4801 scope.go:117] "RemoveContainer" containerID="da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.182268 4801 scope.go:117] "RemoveContainer" containerID="284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206" Nov 24 21:52:52 crc kubenswrapper[4801]: E1124 21:52:52.182961 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206\": container with ID starting with 284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206 not found: ID does not exist" containerID="284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.183145 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206"} err="failed to get container status \"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206\": rpc error: code = NotFound desc = could not find container \"284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206\": container with ID starting with 284fa002b1471d3e7cf04ddbd5a26260d90ea69c436f7b719d8351a093226206 not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.183206 4801 scope.go:117] "RemoveContainer" containerID="88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99" Nov 24 21:52:52 crc kubenswrapper[4801]: E1124 21:52:52.183938 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99\": container with ID starting with 88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99 not found: ID does not exist" containerID="88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.183985 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99"} err="failed to get container status \"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99\": rpc error: code = NotFound desc = could not find container \"88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99\": container with ID starting with 88556c9bd563096d87d5b1df4bdab8e3b635e3dfef943d5541c0fdef0ccd6e99 not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.184007 4801 scope.go:117] "RemoveContainer" containerID="da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187" Nov 24 21:52:52 crc kubenswrapper[4801]: E1124 
21:52:52.184314 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187\": container with ID starting with da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187 not found: ID does not exist" containerID="da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.184341 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187"} err="failed to get container status \"da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187\": rpc error: code = NotFound desc = could not find container \"da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187\": container with ID starting with da952554b539de79fd19554f2ab06ea898c98a1d20db3ba856a99c861d6bc187 not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4801]: I1124 21:52:52.696505 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" path="/var/lib/kubelet/pods/13c83abc-9b5e-4d72-8606-8d0fdff505a2/volumes" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.021723 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:53:48 crc kubenswrapper[4801]: E1124 21:53:48.023244 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="extract-utilities" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.023264 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="extract-utilities" Nov 24 21:53:48 crc kubenswrapper[4801]: E1124 21:53:48.023294 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="registry-server" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.023302 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="registry-server" Nov 24 21:53:48 crc kubenswrapper[4801]: E1124 21:53:48.023347 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="extract-content" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.023356 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="extract-content" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.023705 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c83abc-9b5e-4d72-8606-8d0fdff505a2" containerName="registry-server" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.025968 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.054632 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zp9\" (UniqueName: \"kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.054789 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.054839 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.057082 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.157660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.158107 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.158268 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96zp9\" (UniqueName: \"kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.159121 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " 
pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.159140 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.195511 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96zp9\" (UniqueName: \"kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9\") pod \"redhat-marketplace-p4fvg\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.366563 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:48 crc kubenswrapper[4801]: I1124 21:53:48.893299 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:53:49 crc kubenswrapper[4801]: I1124 21:53:49.821963 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerID="d13ca16c62595b328cb9f9522a7ce2e03507a8a183f3f033297c77c101628d93" exitCode=0 Nov 24 21:53:49 crc kubenswrapper[4801]: I1124 21:53:49.822068 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerDied","Data":"d13ca16c62595b328cb9f9522a7ce2e03507a8a183f3f033297c77c101628d93"} Nov 24 21:53:49 crc kubenswrapper[4801]: I1124 21:53:49.822349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" 
event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerStarted","Data":"3323364834c2b42713a7a5ff503eb0d80b7b61ce4d64be7bdde76947549c7842"} Nov 24 21:53:50 crc kubenswrapper[4801]: I1124 21:53:50.835597 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerStarted","Data":"a1e390861496eb2af8c127ab93d2628d7fc6250a88cb9eff1fbe502cf4646791"} Nov 24 21:53:51 crc kubenswrapper[4801]: I1124 21:53:51.851501 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerID="a1e390861496eb2af8c127ab93d2628d7fc6250a88cb9eff1fbe502cf4646791" exitCode=0 Nov 24 21:53:51 crc kubenswrapper[4801]: I1124 21:53:51.851571 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerDied","Data":"a1e390861496eb2af8c127ab93d2628d7fc6250a88cb9eff1fbe502cf4646791"} Nov 24 21:53:52 crc kubenswrapper[4801]: I1124 21:53:52.869920 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerStarted","Data":"a504f08d0857b1ebc259d1c40b4d742776c5862ffea29537690e5b99b002d57e"} Nov 24 21:53:52 crc kubenswrapper[4801]: I1124 21:53:52.905021 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4fvg" podStartSLOduration=3.455273943 podStartE2EDuration="5.904995239s" podCreationTimestamp="2025-11-24 21:53:47 +0000 UTC" firstStartedPulling="2025-11-24 21:53:49.825589184 +0000 UTC m=+2801.908175854" lastFinishedPulling="2025-11-24 21:53:52.27531048 +0000 UTC m=+2804.357897150" observedRunningTime="2025-11-24 21:53:52.901777409 +0000 UTC m=+2804.984364109" watchObservedRunningTime="2025-11-24 21:53:52.904995239 +0000 UTC 
m=+2804.987581949" Nov 24 21:53:54 crc kubenswrapper[4801]: I1124 21:53:54.319969 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:53:54 crc kubenswrapper[4801]: I1124 21:53:54.320337 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:53:58 crc kubenswrapper[4801]: I1124 21:53:58.367500 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:58 crc kubenswrapper[4801]: I1124 21:53:58.368019 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:58 crc kubenswrapper[4801]: I1124 21:53:58.469236 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:59 crc kubenswrapper[4801]: I1124 21:53:59.033500 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:53:59 crc kubenswrapper[4801]: I1124 21:53:59.100531 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:54:00 crc kubenswrapper[4801]: I1124 21:54:00.985841 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4fvg" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="registry-server" 
containerID="cri-o://a504f08d0857b1ebc259d1c40b4d742776c5862ffea29537690e5b99b002d57e" gracePeriod=2 Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.006343 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerID="a504f08d0857b1ebc259d1c40b4d742776c5862ffea29537690e5b99b002d57e" exitCode=0 Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.006408 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerDied","Data":"a504f08d0857b1ebc259d1c40b4d742776c5862ffea29537690e5b99b002d57e"} Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.006654 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4fvg" event={"ID":"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251","Type":"ContainerDied","Data":"3323364834c2b42713a7a5ff503eb0d80b7b61ce4d64be7bdde76947549c7842"} Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.006689 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3323364834c2b42713a7a5ff503eb0d80b7b61ce4d64be7bdde76947549c7842" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.087454 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.100037 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content\") pod \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.100247 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96zp9\" (UniqueName: \"kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9\") pod \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.100749 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities\") pod \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\" (UID: \"f99c1b5f-90e5-4c2e-a612-adf4dc1fc251\") " Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.102361 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities" (OuterVolumeSpecName: "utilities") pod "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" (UID: "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.124780 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9" (OuterVolumeSpecName: "kube-api-access-96zp9") pod "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" (UID: "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251"). InnerVolumeSpecName "kube-api-access-96zp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.152949 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" (UID: "f99c1b5f-90e5-4c2e-a612-adf4dc1fc251"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.203231 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.203496 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96zp9\" (UniqueName: \"kubernetes.io/projected/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-kube-api-access-96zp9\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:02 crc kubenswrapper[4801]: I1124 21:54:02.203508 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:03 crc kubenswrapper[4801]: I1124 21:54:03.025951 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4fvg" Nov 24 21:54:03 crc kubenswrapper[4801]: I1124 21:54:03.076620 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:54:03 crc kubenswrapper[4801]: I1124 21:54:03.093984 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4fvg"] Nov 24 21:54:04 crc kubenswrapper[4801]: I1124 21:54:04.678666 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" path="/var/lib/kubelet/pods/f99c1b5f-90e5-4c2e-a612-adf4dc1fc251/volumes" Nov 24 21:54:24 crc kubenswrapper[4801]: I1124 21:54:24.320392 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:54:24 crc kubenswrapper[4801]: I1124 21:54:24.321061 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.320123 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.320878 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.320971 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.322661 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.322766 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" gracePeriod=600 Nov 24 21:54:54 crc kubenswrapper[4801]: E1124 21:54:54.456300 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.747777 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" exitCode=0 Nov 24 
21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.747928 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260"} Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.748548 4801 scope.go:117] "RemoveContainer" containerID="32b9ab0cad189d823794f1a6ccde15beb9b2587737564bd76bbed80dfc51667a" Nov 24 21:54:54 crc kubenswrapper[4801]: I1124 21:54:54.750282 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:54:54 crc kubenswrapper[4801]: E1124 21:54:54.750941 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:55:02 crc kubenswrapper[4801]: I1124 21:55:02.862253 4801 generic.go:334] "Generic (PLEG): container finished" podID="3335f4ce-4e53-47f6-b241-792b016762da" containerID="ccafcff4ab808760867f3a80d26ca2b735ac60155cab11ef14e7a8dc39910fb6" exitCode=0 Nov 24 21:55:02 crc kubenswrapper[4801]: I1124 21:55:02.862795 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" event={"ID":"3335f4ce-4e53-47f6-b241-792b016762da","Type":"ContainerDied","Data":"ccafcff4ab808760867f3a80d26ca2b735ac60155cab11ef14e7a8dc39910fb6"} Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.406410 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.497997 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498182 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498255 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498466 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498564 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498621 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498655 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498770 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjh92\" (UniqueName: \"kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.498836 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle\") pod \"3335f4ce-4e53-47f6-b241-792b016762da\" (UID: \"3335f4ce-4e53-47f6-b241-792b016762da\") " Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.504890 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.529013 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92" (OuterVolumeSpecName: "kube-api-access-wjh92") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "kube-api-access-wjh92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.531188 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.535747 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.537828 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory" (OuterVolumeSpecName: "inventory") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.545724 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.550991 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.563596 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.581595 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3335f4ce-4e53-47f6-b241-792b016762da" (UID: "3335f4ce-4e53-47f6-b241-792b016762da"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603437 4801 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3335f4ce-4e53-47f6-b241-792b016762da-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603477 4801 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603493 4801 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603506 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603518 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjh92\" (UniqueName: \"kubernetes.io/projected/3335f4ce-4e53-47f6-b241-792b016762da-kube-api-access-wjh92\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603528 4801 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603539 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc 
kubenswrapper[4801]: I1124 21:55:04.603550 4801 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.603561 4801 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3335f4ce-4e53-47f6-b241-792b016762da-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.895501 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" event={"ID":"3335f4ce-4e53-47f6-b241-792b016762da","Type":"ContainerDied","Data":"885bb2f1c71777ae4ca3208e9313f4f03607d089f908158977c6163cdb5a61ed"} Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.895597 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885bb2f1c71777ae4ca3208e9313f4f03607d089f908158977c6163cdb5a61ed" Nov 24 21:55:04 crc kubenswrapper[4801]: I1124 21:55:04.895756 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cp65d" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.029446 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh"] Nov 24 21:55:05 crc kubenswrapper[4801]: E1124 21:55:05.030683 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="registry-server" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.030825 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="registry-server" Nov 24 21:55:05 crc kubenswrapper[4801]: E1124 21:55:05.030974 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="extract-utilities" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.031083 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="extract-utilities" Nov 24 21:55:05 crc kubenswrapper[4801]: E1124 21:55:05.031202 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="extract-content" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.031305 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="extract-content" Nov 24 21:55:05 crc kubenswrapper[4801]: E1124 21:55:05.031499 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3335f4ce-4e53-47f6-b241-792b016762da" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.031619 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3335f4ce-4e53-47f6-b241-792b016762da" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.032223 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3335f4ce-4e53-47f6-b241-792b016762da" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.032404 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99c1b5f-90e5-4c2e-a612-adf4dc1fc251" containerName="registry-server" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.034082 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.037608 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.037744 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.050450 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.050768 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.051114 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.063449 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh"] Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.117798 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzsm\" (UniqueName: \"kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.118148 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.118441 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.118539 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.118878 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" 
(UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.118985 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.119028 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.221999 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222116 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222277 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222334 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222403 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222479 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzsm\" (UniqueName: \"kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.222640 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.227850 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.227893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.228007 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.228872 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.230093 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.230188 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.252199 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzsm\" (UniqueName: \"kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wbngh\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:05 crc kubenswrapper[4801]: I1124 21:55:05.374243 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:55:06 crc kubenswrapper[4801]: I1124 21:55:05.999678 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh"] Nov 24 21:55:06 crc kubenswrapper[4801]: I1124 21:55:06.935696 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" event={"ID":"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd","Type":"ContainerStarted","Data":"c5f3073425dccb5fc3b9783303ce930eab21d86d895ea3ff0f78738c474082e9"} Nov 24 21:55:06 crc kubenswrapper[4801]: I1124 21:55:06.936754 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" event={"ID":"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd","Type":"ContainerStarted","Data":"e2768a16cc59aa53bdf53dd89da3aacec080c1522c19fd450c1883081a06efde"} Nov 24 21:55:06 crc kubenswrapper[4801]: I1124 21:55:06.973856 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" podStartSLOduration=2.553807709 podStartE2EDuration="2.973833839s" podCreationTimestamp="2025-11-24 21:55:04 +0000 UTC" firstStartedPulling="2025-11-24 21:55:06.007910269 +0000 UTC m=+2878.090496939" lastFinishedPulling="2025-11-24 21:55:06.427936359 +0000 UTC m=+2878.510523069" observedRunningTime="2025-11-24 21:55:06.962143875 +0000 UTC m=+2879.044730585" watchObservedRunningTime="2025-11-24 21:55:06.973833839 +0000 UTC m=+2879.056420519" Nov 24 21:55:07 crc kubenswrapper[4801]: I1124 21:55:07.665048 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:55:07 crc kubenswrapper[4801]: E1124 21:55:07.667264 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:55:18 crc kubenswrapper[4801]: I1124 21:55:18.682032 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:55:18 crc kubenswrapper[4801]: E1124 21:55:18.683022 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:55:33 crc kubenswrapper[4801]: I1124 21:55:33.665037 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:55:33 crc kubenswrapper[4801]: E1124 21:55:33.666497 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:55:44 crc kubenswrapper[4801]: I1124 21:55:44.664500 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:55:44 crc kubenswrapper[4801]: E1124 21:55:44.665759 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:55:59 crc kubenswrapper[4801]: I1124 21:55:59.664813 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:55:59 crc kubenswrapper[4801]: E1124 21:55:59.666179 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:56:12 crc kubenswrapper[4801]: I1124 21:56:12.664655 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:56:12 crc kubenswrapper[4801]: E1124 21:56:12.666161 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:56:23 crc kubenswrapper[4801]: I1124 21:56:23.665114 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:56:23 crc kubenswrapper[4801]: E1124 21:56:23.666056 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:56:34 crc kubenswrapper[4801]: I1124 21:56:34.664695 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:56:34 crc kubenswrapper[4801]: E1124 21:56:34.666473 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:56:46 crc kubenswrapper[4801]: I1124 21:56:46.664058 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:56:46 crc kubenswrapper[4801]: E1124 21:56:46.666037 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:01 crc kubenswrapper[4801]: I1124 21:57:01.665014 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:57:01 crc kubenswrapper[4801]: E1124 21:57:01.666263 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:13 crc kubenswrapper[4801]: I1124 21:57:13.664218 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:57:13 crc kubenswrapper[4801]: E1124 21:57:13.665258 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:25 crc kubenswrapper[4801]: I1124 21:57:25.664225 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:57:25 crc kubenswrapper[4801]: E1124 21:57:25.665422 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:38 crc kubenswrapper[4801]: I1124 21:57:38.675041 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:57:38 crc kubenswrapper[4801]: E1124 21:57:38.676166 4801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:48 crc kubenswrapper[4801]: I1124 21:57:48.195850 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" containerID="c5f3073425dccb5fc3b9783303ce930eab21d86d895ea3ff0f78738c474082e9" exitCode=0 Nov 24 21:57:48 crc kubenswrapper[4801]: I1124 21:57:48.195935 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" event={"ID":"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd","Type":"ContainerDied","Data":"c5f3073425dccb5fc3b9783303ce930eab21d86d895ea3ff0f78738c474082e9"} Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.664973 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:57:49 crc kubenswrapper[4801]: E1124 21:57:49.665412 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.736991 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.762758 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.762866 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.763089 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.763341 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.763383 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.763669 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.763711 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzsm\" (UniqueName: \"kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm\") pod \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\" (UID: \"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd\") " Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.788757 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.806502 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm" (OuterVolumeSpecName: "kube-api-access-bvzsm") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "kube-api-access-bvzsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.859764 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.864586 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory" (OuterVolumeSpecName: "inventory") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.866159 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.866192 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.866203 4801 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.866213 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzsm\" (UniqueName: 
\"kubernetes.io/projected/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-kube-api-access-bvzsm\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.900572 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.909301 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.923526 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" (UID: "ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.974247 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.974511 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:49 crc kubenswrapper[4801]: I1124 21:57:49.974567 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.227981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" event={"ID":"ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd","Type":"ContainerDied","Data":"e2768a16cc59aa53bdf53dd89da3aacec080c1522c19fd450c1883081a06efde"} Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.228049 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2768a16cc59aa53bdf53dd89da3aacec080c1522c19fd450c1883081a06efde" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.228089 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wbngh" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.369446 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf"] Nov 24 21:57:50 crc kubenswrapper[4801]: E1124 21:57:50.370213 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.370240 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.370560 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.371707 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.375022 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.376895 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.377831 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.378499 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.379245 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.383565 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf"] Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385022 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385084 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg692\" (UniqueName: 
\"kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385233 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385330 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385427 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385493 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.385515 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.487835 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.487900 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg692\" (UniqueName: \"kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.487987 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.488043 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.488079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.488110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.488133 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.492549 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.492585 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.493105 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.493861 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.494071 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.494257 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.506905 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg692\" (UniqueName: \"kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:50 crc kubenswrapper[4801]: I1124 21:57:50.689102 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 21:57:51 crc kubenswrapper[4801]: I1124 21:57:51.312313 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf"] Nov 24 21:57:51 crc kubenswrapper[4801]: I1124 21:57:51.315607 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:57:52 crc kubenswrapper[4801]: I1124 21:57:52.251698 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" event={"ID":"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb","Type":"ContainerStarted","Data":"d605ceec462e48a640b1e4aeefac4c560e0f068a20f1bdf199d541826f4894a7"} Nov 24 21:57:52 crc kubenswrapper[4801]: I1124 21:57:52.252216 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" event={"ID":"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb","Type":"ContainerStarted","Data":"bf5da783ca6cc2a16b4f9f30b37a5fa11cbb5259fc38f9d287cde32308417d66"} Nov 24 21:57:52 crc kubenswrapper[4801]: I1124 21:57:52.283457 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" podStartSLOduration=1.80788744 podStartE2EDuration="2.283430794s" podCreationTimestamp="2025-11-24 21:57:50 +0000 UTC" firstStartedPulling="2025-11-24 21:57:51.31539263 +0000 UTC m=+3043.397979290" lastFinishedPulling="2025-11-24 21:57:51.790935964 +0000 UTC m=+3043.873522644" observedRunningTime="2025-11-24 21:57:52.272190644 +0000 UTC m=+3044.354777324" watchObservedRunningTime="2025-11-24 21:57:52.283430794 +0000 UTC m=+3044.366017474" Nov 24 21:58:00 crc kubenswrapper[4801]: I1124 21:58:00.664655 4801 scope.go:117] "RemoveContainer" 
containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:58:00 crc kubenswrapper[4801]: E1124 21:58:00.665941 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:58:13 crc kubenswrapper[4801]: I1124 21:58:13.664815 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:58:13 crc kubenswrapper[4801]: E1124 21:58:13.666478 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:58:27 crc kubenswrapper[4801]: I1124 21:58:27.665754 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:58:27 crc kubenswrapper[4801]: E1124 21:58:27.667192 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:58:39 crc kubenswrapper[4801]: I1124 21:58:39.664915 4801 scope.go:117] 
"RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:58:39 crc kubenswrapper[4801]: E1124 21:58:39.666170 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:58:53 crc kubenswrapper[4801]: I1124 21:58:53.665245 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:58:53 crc kubenswrapper[4801]: E1124 21:58:53.666556 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:59:07 crc kubenswrapper[4801]: I1124 21:59:07.665217 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:59:07 crc kubenswrapper[4801]: E1124 21:59:07.666376 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:59:21 crc kubenswrapper[4801]: I1124 21:59:21.664941 
4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:59:21 crc kubenswrapper[4801]: E1124 21:59:21.666106 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:59:36 crc kubenswrapper[4801]: I1124 21:59:36.665492 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:59:36 crc kubenswrapper[4801]: E1124 21:59:36.666658 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 21:59:50 crc kubenswrapper[4801]: I1124 21:59:50.664243 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 21:59:50 crc kubenswrapper[4801]: E1124 21:59:50.665617 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 
22:00:00.184642 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92"] Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.186998 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.190145 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.190146 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.201356 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92"] Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.299038 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvf5v\" (UniqueName: \"kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.299600 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.299731 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.402897 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.403078 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.403283 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvf5v\" (UniqueName: \"kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.404035 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc 
kubenswrapper[4801]: I1124 22:00:00.411214 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.436721 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvf5v\" (UniqueName: \"kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v\") pod \"collect-profiles-29400360-vgs92\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:00 crc kubenswrapper[4801]: I1124 22:00:00.526891 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:01 crc kubenswrapper[4801]: I1124 22:00:01.076065 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92"] Nov 24 22:00:01 crc kubenswrapper[4801]: I1124 22:00:01.124144 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" event={"ID":"1497ed80-3ec3-4646-8087-54baca8015da","Type":"ContainerStarted","Data":"d31311ce8c5d9cf4c5ace666fdcfb27a6f1ff87ff5eda37b783dfb4b58d69fff"} Nov 24 22:00:02 crc kubenswrapper[4801]: I1124 22:00:02.153524 4801 generic.go:334] "Generic (PLEG): container finished" podID="1497ed80-3ec3-4646-8087-54baca8015da" containerID="2b93a35f1bac34d8f4eb45443e4a0c90ef8e9a0451b0cd4c611484f0659dad93" exitCode=0 Nov 24 22:00:02 crc kubenswrapper[4801]: I1124 22:00:02.153838 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" event={"ID":"1497ed80-3ec3-4646-8087-54baca8015da","Type":"ContainerDied","Data":"2b93a35f1bac34d8f4eb45443e4a0c90ef8e9a0451b0cd4c611484f0659dad93"} Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.631102 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.664050 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.702442 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume\") pod \"1497ed80-3ec3-4646-8087-54baca8015da\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.711739 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1497ed80-3ec3-4646-8087-54baca8015da" (UID: "1497ed80-3ec3-4646-8087-54baca8015da"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.804096 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvf5v\" (UniqueName: \"kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v\") pod \"1497ed80-3ec3-4646-8087-54baca8015da\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.804508 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume\") pod \"1497ed80-3ec3-4646-8087-54baca8015da\" (UID: \"1497ed80-3ec3-4646-8087-54baca8015da\") " Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.805482 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1497ed80-3ec3-4646-8087-54baca8015da-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.806031 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume" (OuterVolumeSpecName: "config-volume") pod "1497ed80-3ec3-4646-8087-54baca8015da" (UID: "1497ed80-3ec3-4646-8087-54baca8015da"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.812200 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v" (OuterVolumeSpecName: "kube-api-access-zvf5v") pod "1497ed80-3ec3-4646-8087-54baca8015da" (UID: "1497ed80-3ec3-4646-8087-54baca8015da"). InnerVolumeSpecName "kube-api-access-zvf5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.911672 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvf5v\" (UniqueName: \"kubernetes.io/projected/1497ed80-3ec3-4646-8087-54baca8015da-kube-api-access-zvf5v\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:03 crc kubenswrapper[4801]: I1124 22:00:03.911720 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1497ed80-3ec3-4646-8087-54baca8015da-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.192886 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.192894 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92" event={"ID":"1497ed80-3ec3-4646-8087-54baca8015da","Type":"ContainerDied","Data":"d31311ce8c5d9cf4c5ace666fdcfb27a6f1ff87ff5eda37b783dfb4b58d69fff"} Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.193465 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31311ce8c5d9cf4c5ace666fdcfb27a6f1ff87ff5eda37b783dfb4b58d69fff" Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.203387 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b"} Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.724661 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws"] Nov 24 22:00:04 crc kubenswrapper[4801]: I1124 22:00:04.747790 4801 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-qvhws"] Nov 24 22:00:06 crc kubenswrapper[4801]: I1124 22:00:06.693811 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915b6a61-708c-481b-9fd3-42124c0449bd" path="/var/lib/kubelet/pods/915b6a61-708c-481b-9fd3-42124c0449bd/volumes" Nov 24 22:00:13 crc kubenswrapper[4801]: I1124 22:00:13.322334 4801 generic.go:334] "Generic (PLEG): container finished" podID="0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" containerID="d605ceec462e48a640b1e4aeefac4c560e0f068a20f1bdf199d541826f4894a7" exitCode=0 Nov 24 22:00:13 crc kubenswrapper[4801]: I1124 22:00:13.323140 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" event={"ID":"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb","Type":"ContainerDied","Data":"d605ceec462e48a640b1e4aeefac4c560e0f068a20f1bdf199d541826f4894a7"} Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.829581 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.959704 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.959815 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.959887 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg692\" (UniqueName: \"kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.959919 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.959938 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.960021 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.960060 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2\") pod \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\" (UID: \"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb\") " Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.965228 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.966195 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692" (OuterVolumeSpecName: "kube-api-access-vg692") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "kube-api-access-vg692". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.992501 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory" (OuterVolumeSpecName: "inventory") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:14 crc kubenswrapper[4801]: I1124 22:00:14.993639 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:14.999581 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.009061 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.013295 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" (UID: "0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063483 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063518 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063529 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063541 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg692\" (UniqueName: \"kubernetes.io/projected/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-kube-api-access-vg692\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063550 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 
22:00:15.063560 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.063571 4801 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.349853 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" event={"ID":"0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb","Type":"ContainerDied","Data":"bf5da783ca6cc2a16b4f9f30b37a5fa11cbb5259fc38f9d287cde32308417d66"} Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.349899 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf5da783ca6cc2a16b4f9f30b37a5fa11cbb5259fc38f9d287cde32308417d66" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.349957 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.439159 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5"] Nov 24 22:00:15 crc kubenswrapper[4801]: E1124 22:00:15.439962 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1497ed80-3ec3-4646-8087-54baca8015da" containerName="collect-profiles" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.440045 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1497ed80-3ec3-4646-8087-54baca8015da" containerName="collect-profiles" Nov 24 22:00:15 crc kubenswrapper[4801]: E1124 22:00:15.440096 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.440146 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.440449 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.440526 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1497ed80-3ec3-4646-8087-54baca8015da" containerName="collect-profiles" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.441841 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.445467 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.445973 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.446257 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.446462 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j58wb" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.446651 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.463688 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5"] Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.472516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.472729 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: 
\"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.472894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.473130 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkjh\" (UniqueName: \"kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.473230 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.576409 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 
22:00:15.576534 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.576563 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.576659 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.576804 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gkjh\" (UniqueName: \"kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.580698 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: 
\"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.580751 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.581084 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.581920 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.595646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gkjh\" (UniqueName: \"kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-w7cc5\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:15 crc kubenswrapper[4801]: I1124 22:00:15.778969 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:16 crc kubenswrapper[4801]: I1124 22:00:16.398916 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5"] Nov 24 22:00:17 crc kubenswrapper[4801]: I1124 22:00:17.373610 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" event={"ID":"eb36beab-725c-4fa1-a960-47e85a2554a7","Type":"ContainerStarted","Data":"06094fe7e90e5d0598fda48a0305d381ec9146a98f4f70b8d25737078d37431f"} Nov 24 22:00:17 crc kubenswrapper[4801]: I1124 22:00:17.374004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" event={"ID":"eb36beab-725c-4fa1-a960-47e85a2554a7","Type":"ContainerStarted","Data":"9b1a602d593ff75e033f3e4c0729ad33d4f1cfc256ae519b3f8b2cd2104a9a16"} Nov 24 22:00:17 crc kubenswrapper[4801]: I1124 22:00:17.399595 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" podStartSLOduration=1.984710135 podStartE2EDuration="2.399557309s" podCreationTimestamp="2025-11-24 22:00:15 +0000 UTC" firstStartedPulling="2025-11-24 22:00:16.410189532 +0000 UTC m=+3188.492776202" lastFinishedPulling="2025-11-24 22:00:16.825036696 +0000 UTC m=+3188.907623376" observedRunningTime="2025-11-24 22:00:17.399060734 +0000 UTC m=+3189.481647414" watchObservedRunningTime="2025-11-24 22:00:17.399557309 +0000 UTC m=+3189.482143989" Nov 24 22:00:31 crc kubenswrapper[4801]: I1124 22:00:31.932682 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:31 crc kubenswrapper[4801]: I1124 22:00:31.936307 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:31 crc kubenswrapper[4801]: I1124 22:00:31.948279 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.091542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fx4\" (UniqueName: \"kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.091626 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.091731 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.193562 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72fx4\" (UniqueName: \"kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.193905 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.193992 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.194647 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.194639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.231323 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72fx4\" (UniqueName: \"kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4\") pod \"community-operators-zznsx\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.272353 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:32 crc kubenswrapper[4801]: I1124 22:00:32.851004 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:32 crc kubenswrapper[4801]: W1124 22:00:32.852409 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b10ab0_e35d_4f9c_9a0a_651dea3cf697.slice/crio-dc8e4850bc1e5e3032628af0702fe68659beb3a9956c6937825e6d1619072c3d WatchSource:0}: Error finding container dc8e4850bc1e5e3032628af0702fe68659beb3a9956c6937825e6d1619072c3d: Status 404 returned error can't find the container with id dc8e4850bc1e5e3032628af0702fe68659beb3a9956c6937825e6d1619072c3d Nov 24 22:00:33 crc kubenswrapper[4801]: I1124 22:00:33.582521 4801 generic.go:334] "Generic (PLEG): container finished" podID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerID="587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9" exitCode=0 Nov 24 22:00:33 crc kubenswrapper[4801]: I1124 22:00:33.582598 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerDied","Data":"587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9"} Nov 24 22:00:33 crc kubenswrapper[4801]: I1124 22:00:33.583089 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerStarted","Data":"dc8e4850bc1e5e3032628af0702fe68659beb3a9956c6937825e6d1619072c3d"} Nov 24 22:00:33 crc kubenswrapper[4801]: I1124 22:00:33.586017 4801 generic.go:334] "Generic (PLEG): container finished" podID="eb36beab-725c-4fa1-a960-47e85a2554a7" containerID="06094fe7e90e5d0598fda48a0305d381ec9146a98f4f70b8d25737078d37431f" exitCode=0 Nov 24 22:00:33 crc kubenswrapper[4801]: I1124 
22:00:33.586100 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" event={"ID":"eb36beab-725c-4fa1-a960-47e85a2554a7","Type":"ContainerDied","Data":"06094fe7e90e5d0598fda48a0305d381ec9146a98f4f70b8d25737078d37431f"} Nov 24 22:00:34 crc kubenswrapper[4801]: I1124 22:00:34.602203 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerStarted","Data":"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0"} Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.167965 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.287646 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1\") pod \"eb36beab-725c-4fa1-a960-47e85a2554a7\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.288108 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key\") pod \"eb36beab-725c-4fa1-a960-47e85a2554a7\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.288475 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory\") pod \"eb36beab-725c-4fa1-a960-47e85a2554a7\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.288814 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2gkjh\" (UniqueName: \"kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh\") pod \"eb36beab-725c-4fa1-a960-47e85a2554a7\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.289166 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0\") pod \"eb36beab-725c-4fa1-a960-47e85a2554a7\" (UID: \"eb36beab-725c-4fa1-a960-47e85a2554a7\") " Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.293585 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh" (OuterVolumeSpecName: "kube-api-access-2gkjh") pod "eb36beab-725c-4fa1-a960-47e85a2554a7" (UID: "eb36beab-725c-4fa1-a960-47e85a2554a7"). InnerVolumeSpecName "kube-api-access-2gkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.330446 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "eb36beab-725c-4fa1-a960-47e85a2554a7" (UID: "eb36beab-725c-4fa1-a960-47e85a2554a7"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.332634 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory" (OuterVolumeSpecName: "inventory") pod "eb36beab-725c-4fa1-a960-47e85a2554a7" (UID: "eb36beab-725c-4fa1-a960-47e85a2554a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.332669 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb36beab-725c-4fa1-a960-47e85a2554a7" (UID: "eb36beab-725c-4fa1-a960-47e85a2554a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.355073 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "eb36beab-725c-4fa1-a960-47e85a2554a7" (UID: "eb36beab-725c-4fa1-a960-47e85a2554a7"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.394264 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.394310 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gkjh\" (UniqueName: \"kubernetes.io/projected/eb36beab-725c-4fa1-a960-47e85a2554a7-kube-api-access-2gkjh\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.394325 4801 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.394335 4801 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.394349 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb36beab-725c-4fa1-a960-47e85a2554a7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.618052 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.618071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-w7cc5" event={"ID":"eb36beab-725c-4fa1-a960-47e85a2554a7","Type":"ContainerDied","Data":"9b1a602d593ff75e033f3e4c0729ad33d4f1cfc256ae519b3f8b2cd2104a9a16"} Nov 24 22:00:35 crc kubenswrapper[4801]: I1124 22:00:35.618733 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1a602d593ff75e033f3e4c0729ad33d4f1cfc256ae519b3f8b2cd2104a9a16" Nov 24 22:00:36 crc kubenswrapper[4801]: I1124 22:00:36.635398 4801 generic.go:334] "Generic (PLEG): container finished" podID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerID="2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0" exitCode=0 Nov 24 22:00:36 crc kubenswrapper[4801]: I1124 22:00:36.635508 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerDied","Data":"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0"} Nov 24 22:00:37 crc kubenswrapper[4801]: I1124 22:00:37.650142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" 
event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerStarted","Data":"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26"} Nov 24 22:00:37 crc kubenswrapper[4801]: I1124 22:00:37.683679 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zznsx" podStartSLOduration=3.154261139 podStartE2EDuration="6.683654554s" podCreationTimestamp="2025-11-24 22:00:31 +0000 UTC" firstStartedPulling="2025-11-24 22:00:33.585423792 +0000 UTC m=+3205.668010502" lastFinishedPulling="2025-11-24 22:00:37.114817237 +0000 UTC m=+3209.197403917" observedRunningTime="2025-11-24 22:00:37.679205936 +0000 UTC m=+3209.761792606" watchObservedRunningTime="2025-11-24 22:00:37.683654554 +0000 UTC m=+3209.766241234" Nov 24 22:00:42 crc kubenswrapper[4801]: I1124 22:00:42.272597 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:42 crc kubenswrapper[4801]: I1124 22:00:42.273238 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:43 crc kubenswrapper[4801]: I1124 22:00:43.345002 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zznsx" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="registry-server" probeResult="failure" output=< Nov 24 22:00:43 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:00:43 crc kubenswrapper[4801]: > Nov 24 22:00:48 crc kubenswrapper[4801]: I1124 22:00:48.768246 4801 scope.go:117] "RemoveContainer" containerID="ad24324ea40a99984b9807a7f27c2e361279e2e669f5065e3f58b90353659e61" Nov 24 22:00:48 crc kubenswrapper[4801]: I1124 22:00:48.832738 4801 scope.go:117] "RemoveContainer" containerID="d13ca16c62595b328cb9f9522a7ce2e03507a8a183f3f033297c77c101628d93" Nov 24 22:00:48 crc kubenswrapper[4801]: I1124 
22:00:48.892671 4801 scope.go:117] "RemoveContainer" containerID="a504f08d0857b1ebc259d1c40b4d742776c5862ffea29537690e5b99b002d57e" Nov 24 22:00:48 crc kubenswrapper[4801]: I1124 22:00:48.947243 4801 scope.go:117] "RemoveContainer" containerID="a1e390861496eb2af8c127ab93d2628d7fc6250a88cb9eff1fbe502cf4646791" Nov 24 22:00:52 crc kubenswrapper[4801]: I1124 22:00:52.371026 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:52 crc kubenswrapper[4801]: I1124 22:00:52.451059 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:52 crc kubenswrapper[4801]: I1124 22:00:52.624715 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:53 crc kubenswrapper[4801]: I1124 22:00:53.869338 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zznsx" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="registry-server" containerID="cri-o://d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26" gracePeriod=2 Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.460639 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.558732 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72fx4\" (UniqueName: \"kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4\") pod \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.558888 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities\") pod \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.559027 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content\") pod \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\" (UID: \"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697\") " Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.560073 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities" (OuterVolumeSpecName: "utilities") pod "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" (UID: "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.560448 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.569976 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4" (OuterVolumeSpecName: "kube-api-access-72fx4") pod "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" (UID: "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697"). InnerVolumeSpecName "kube-api-access-72fx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.608434 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" (UID: "f7b10ab0-e35d-4f9c-9a0a-651dea3cf697"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.663576 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72fx4\" (UniqueName: \"kubernetes.io/projected/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-kube-api-access-72fx4\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.663620 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.886814 4801 generic.go:334] "Generic (PLEG): container finished" podID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerID="d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26" exitCode=0 Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.886911 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerDied","Data":"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26"} Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.886991 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zznsx" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.887013 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zznsx" event={"ID":"f7b10ab0-e35d-4f9c-9a0a-651dea3cf697","Type":"ContainerDied","Data":"dc8e4850bc1e5e3032628af0702fe68659beb3a9956c6937825e6d1619072c3d"} Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.887065 4801 scope.go:117] "RemoveContainer" containerID="d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.930922 4801 scope.go:117] "RemoveContainer" containerID="2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0" Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.932875 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.947870 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zznsx"] Nov 24 22:00:54 crc kubenswrapper[4801]: I1124 22:00:54.974073 4801 scope.go:117] "RemoveContainer" containerID="587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.052108 4801 scope.go:117] "RemoveContainer" containerID="d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26" Nov 24 22:00:55 crc kubenswrapper[4801]: E1124 22:00:55.053584 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26\": container with ID starting with d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26 not found: ID does not exist" containerID="d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.053669 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26"} err="failed to get container status \"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26\": rpc error: code = NotFound desc = could not find container \"d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26\": container with ID starting with d1d0e2b634b0b5db828706e51382fe2329c1a1a2cee6631f9ea94dfc3ea5dc26 not found: ID does not exist" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.053720 4801 scope.go:117] "RemoveContainer" containerID="2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0" Nov 24 22:00:55 crc kubenswrapper[4801]: E1124 22:00:55.054321 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0\": container with ID starting with 2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0 not found: ID does not exist" containerID="2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.054408 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0"} err="failed to get container status \"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0\": rpc error: code = NotFound desc = could not find container \"2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0\": container with ID starting with 2bcfb548897841723d857ee20aeb65d60cec214f7005f71707a098b1b515b7d0 not found: ID does not exist" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.054442 4801 scope.go:117] "RemoveContainer" containerID="587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9" Nov 24 22:00:55 crc kubenswrapper[4801]: E1124 
22:00:55.055069 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9\": container with ID starting with 587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9 not found: ID does not exist" containerID="587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9" Nov 24 22:00:55 crc kubenswrapper[4801]: I1124 22:00:55.055140 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9"} err="failed to get container status \"587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9\": rpc error: code = NotFound desc = could not find container \"587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9\": container with ID starting with 587fe2657be022f99f58da7ecc9a120f9cc9b2d88a79fe29ab5b546cc577ada9 not found: ID does not exist" Nov 24 22:00:56 crc kubenswrapper[4801]: I1124 22:00:56.682932 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" path="/var/lib/kubelet/pods/f7b10ab0-e35d-4f9c-9a0a-651dea3cf697/volumes" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.170951 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400361-pbxgd"] Nov 24 22:01:00 crc kubenswrapper[4801]: E1124 22:01:00.172165 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="extract-utilities" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172183 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="extract-utilities" Nov 24 22:01:00 crc kubenswrapper[4801]: E1124 22:01:00.172223 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" 
containerName="registry-server" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172233 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="registry-server" Nov 24 22:01:00 crc kubenswrapper[4801]: E1124 22:01:00.172260 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="extract-content" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172268 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="extract-content" Nov 24 22:01:00 crc kubenswrapper[4801]: E1124 22:01:00.172290 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb36beab-725c-4fa1-a960-47e85a2554a7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172299 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb36beab-725c-4fa1-a960-47e85a2554a7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172611 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b10ab0-e35d-4f9c-9a0a-651dea3cf697" containerName="registry-server" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.172648 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb36beab-725c-4fa1-a960-47e85a2554a7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.174019 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.200797 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-pbxgd"] Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.248614 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.248664 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpr4t\" (UniqueName: \"kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.248686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.248911 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.351089 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.351182 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpr4t\" (UniqueName: \"kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.351219 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.351359 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.360225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.361556 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.366301 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.372624 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpr4t\" (UniqueName: \"kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t\") pod \"keystone-cron-29400361-pbxgd\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:00 crc kubenswrapper[4801]: I1124 22:01:00.500642 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:01 crc kubenswrapper[4801]: I1124 22:01:01.022438 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-pbxgd"] Nov 24 22:01:01 crc kubenswrapper[4801]: I1124 22:01:01.972968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-pbxgd" event={"ID":"33464b9f-95a5-4b35-90fd-c382cb6899e2","Type":"ContainerStarted","Data":"83ffcad02110c10233e8bfea1c041232388f43b9352067725792b62aa934b65b"} Nov 24 22:01:01 crc kubenswrapper[4801]: I1124 22:01:01.973626 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-pbxgd" event={"ID":"33464b9f-95a5-4b35-90fd-c382cb6899e2","Type":"ContainerStarted","Data":"4e3cab7e144a52b4a610bdf6acbba3d08a5537971f1c284a0811605a7cc9e8cc"} Nov 24 22:01:02 crc kubenswrapper[4801]: I1124 22:01:02.005582 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400361-pbxgd" podStartSLOduration=2.00556047 podStartE2EDuration="2.00556047s" podCreationTimestamp="2025-11-24 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:01:01.996412656 +0000 UTC m=+3234.078999346" watchObservedRunningTime="2025-11-24 22:01:02.00556047 +0000 UTC m=+3234.088147150" Nov 24 22:01:05 crc kubenswrapper[4801]: I1124 22:01:05.024126 4801 generic.go:334] "Generic (PLEG): container finished" podID="33464b9f-95a5-4b35-90fd-c382cb6899e2" containerID="83ffcad02110c10233e8bfea1c041232388f43b9352067725792b62aa934b65b" exitCode=0 Nov 24 22:01:05 crc kubenswrapper[4801]: I1124 22:01:05.024253 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-pbxgd" event={"ID":"33464b9f-95a5-4b35-90fd-c382cb6899e2","Type":"ContainerDied","Data":"83ffcad02110c10233e8bfea1c041232388f43b9352067725792b62aa934b65b"} 
Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.483701 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.570496 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpr4t\" (UniqueName: \"kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t\") pod \"33464b9f-95a5-4b35-90fd-c382cb6899e2\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.570780 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys\") pod \"33464b9f-95a5-4b35-90fd-c382cb6899e2\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.570847 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle\") pod \"33464b9f-95a5-4b35-90fd-c382cb6899e2\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.571267 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data\") pod \"33464b9f-95a5-4b35-90fd-c382cb6899e2\" (UID: \"33464b9f-95a5-4b35-90fd-c382cb6899e2\") " Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.577656 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "33464b9f-95a5-4b35-90fd-c382cb6899e2" (UID: "33464b9f-95a5-4b35-90fd-c382cb6899e2"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.580693 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t" (OuterVolumeSpecName: "kube-api-access-zpr4t") pod "33464b9f-95a5-4b35-90fd-c382cb6899e2" (UID: "33464b9f-95a5-4b35-90fd-c382cb6899e2"). InnerVolumeSpecName "kube-api-access-zpr4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.639287 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data" (OuterVolumeSpecName: "config-data") pod "33464b9f-95a5-4b35-90fd-c382cb6899e2" (UID: "33464b9f-95a5-4b35-90fd-c382cb6899e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.641702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33464b9f-95a5-4b35-90fd-c382cb6899e2" (UID: "33464b9f-95a5-4b35-90fd-c382cb6899e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.675302 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.675332 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpr4t\" (UniqueName: \"kubernetes.io/projected/33464b9f-95a5-4b35-90fd-c382cb6899e2-kube-api-access-zpr4t\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.675342 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:06 crc kubenswrapper[4801]: I1124 22:01:06.675351 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33464b9f-95a5-4b35-90fd-c382cb6899e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4801]: I1124 22:01:07.054558 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-pbxgd" event={"ID":"33464b9f-95a5-4b35-90fd-c382cb6899e2","Type":"ContainerDied","Data":"4e3cab7e144a52b4a610bdf6acbba3d08a5537971f1c284a0811605a7cc9e8cc"} Nov 24 22:01:07 crc kubenswrapper[4801]: I1124 22:01:07.054858 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3cab7e144a52b4a610bdf6acbba3d08a5537971f1c284a0811605a7cc9e8cc" Nov 24 22:01:07 crc kubenswrapper[4801]: I1124 22:01:07.054612 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-pbxgd" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.129564 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:10 crc kubenswrapper[4801]: E1124 22:02:10.130999 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33464b9f-95a5-4b35-90fd-c382cb6899e2" containerName="keystone-cron" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.131024 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="33464b9f-95a5-4b35-90fd-c382cb6899e2" containerName="keystone-cron" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.131480 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="33464b9f-95a5-4b35-90fd-c382cb6899e2" containerName="keystone-cron" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.134929 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.163014 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.228618 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvlz\" (UniqueName: \"kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.228845 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " 
pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.228909 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.333730 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.333858 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.334082 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvlz\" (UniqueName: \"kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.335170 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " 
pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.335308 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.359332 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvlz\" (UniqueName: \"kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz\") pod \"redhat-operators-psvnq\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:10 crc kubenswrapper[4801]: I1124 22:02:10.465786 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:11 crc kubenswrapper[4801]: I1124 22:02:11.066824 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:12 crc kubenswrapper[4801]: I1124 22:02:12.007710 4801 generic.go:334] "Generic (PLEG): container finished" podID="3437d745-3e40-4005-9166-f280552e89c3" containerID="e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea" exitCode=0 Nov 24 22:02:12 crc kubenswrapper[4801]: I1124 22:02:12.008043 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerDied","Data":"e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea"} Nov 24 22:02:12 crc kubenswrapper[4801]: I1124 22:02:12.008294 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" 
event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerStarted","Data":"4365f18fae6ee68b16d39cde34e4234aabec85db48a21e465ba0c34ed2355063"} Nov 24 22:02:14 crc kubenswrapper[4801]: I1124 22:02:14.037643 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerStarted","Data":"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f"} Nov 24 22:02:16 crc kubenswrapper[4801]: E1124 22:02:16.149604 4801 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.83:32798->38.102.83.83:34545: read tcp 38.102.83.83:32798->38.102.83.83:34545: read: connection reset by peer Nov 24 22:02:17 crc kubenswrapper[4801]: I1124 22:02:17.083795 4801 generic.go:334] "Generic (PLEG): container finished" podID="3437d745-3e40-4005-9166-f280552e89c3" containerID="788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f" exitCode=0 Nov 24 22:02:17 crc kubenswrapper[4801]: I1124 22:02:17.084477 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerDied","Data":"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f"} Nov 24 22:02:18 crc kubenswrapper[4801]: I1124 22:02:18.103528 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerStarted","Data":"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659"} Nov 24 22:02:18 crc kubenswrapper[4801]: I1124 22:02:18.131357 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psvnq" podStartSLOduration=2.585510472 podStartE2EDuration="8.131331755s" podCreationTimestamp="2025-11-24 22:02:10 +0000 UTC" firstStartedPulling="2025-11-24 22:02:12.010976818 +0000 
UTC m=+3304.093563488" lastFinishedPulling="2025-11-24 22:02:17.556798081 +0000 UTC m=+3309.639384771" observedRunningTime="2025-11-24 22:02:18.126543926 +0000 UTC m=+3310.209130626" watchObservedRunningTime="2025-11-24 22:02:18.131331755 +0000 UTC m=+3310.213918435" Nov 24 22:02:20 crc kubenswrapper[4801]: I1124 22:02:20.466243 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:20 crc kubenswrapper[4801]: I1124 22:02:20.467036 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:21 crc kubenswrapper[4801]: I1124 22:02:21.564522 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-psvnq" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="registry-server" probeResult="failure" output=< Nov 24 22:02:21 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:02:21 crc kubenswrapper[4801]: > Nov 24 22:02:24 crc kubenswrapper[4801]: I1124 22:02:24.321868 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:02:24 crc kubenswrapper[4801]: I1124 22:02:24.322245 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:02:30 crc kubenswrapper[4801]: I1124 22:02:30.568419 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 
22:02:30 crc kubenswrapper[4801]: I1124 22:02:30.639006 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:30 crc kubenswrapper[4801]: I1124 22:02:30.828834 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:32 crc kubenswrapper[4801]: I1124 22:02:32.341610 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psvnq" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="registry-server" containerID="cri-o://f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659" gracePeriod=2 Nov 24 22:02:32 crc kubenswrapper[4801]: I1124 22:02:32.943855 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.108093 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvlz\" (UniqueName: \"kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz\") pod \"3437d745-3e40-4005-9166-f280552e89c3\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.108333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content\") pod \"3437d745-3e40-4005-9166-f280552e89c3\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.108405 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities\") pod \"3437d745-3e40-4005-9166-f280552e89c3\" (UID: \"3437d745-3e40-4005-9166-f280552e89c3\") " Nov 24 22:02:33 
crc kubenswrapper[4801]: I1124 22:02:33.109287 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities" (OuterVolumeSpecName: "utilities") pod "3437d745-3e40-4005-9166-f280552e89c3" (UID: "3437d745-3e40-4005-9166-f280552e89c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.120458 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz" (OuterVolumeSpecName: "kube-api-access-bbvlz") pod "3437d745-3e40-4005-9166-f280552e89c3" (UID: "3437d745-3e40-4005-9166-f280552e89c3"). InnerVolumeSpecName "kube-api-access-bbvlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.209195 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3437d745-3e40-4005-9166-f280552e89c3" (UID: "3437d745-3e40-4005-9166-f280552e89c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.211533 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvlz\" (UniqueName: \"kubernetes.io/projected/3437d745-3e40-4005-9166-f280552e89c3-kube-api-access-bbvlz\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.211572 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.211584 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3437d745-3e40-4005-9166-f280552e89c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.355719 4801 generic.go:334] "Generic (PLEG): container finished" podID="3437d745-3e40-4005-9166-f280552e89c3" containerID="f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659" exitCode=0 Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.355779 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psvnq" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.355778 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerDied","Data":"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659"} Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.355837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psvnq" event={"ID":"3437d745-3e40-4005-9166-f280552e89c3","Type":"ContainerDied","Data":"4365f18fae6ee68b16d39cde34e4234aabec85db48a21e465ba0c34ed2355063"} Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.355859 4801 scope.go:117] "RemoveContainer" containerID="f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.391071 4801 scope.go:117] "RemoveContainer" containerID="788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.402795 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.414641 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psvnq"] Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.434807 4801 scope.go:117] "RemoveContainer" containerID="e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.493337 4801 scope.go:117] "RemoveContainer" containerID="f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659" Nov 24 22:02:33 crc kubenswrapper[4801]: E1124 22:02:33.494405 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659\": container with ID starting with f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659 not found: ID does not exist" containerID="f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.494467 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659"} err="failed to get container status \"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659\": rpc error: code = NotFound desc = could not find container \"f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659\": container with ID starting with f364d242dae6e335612c278beddf62cd294cebb2c9805ddc125ad1ef2cd5f659 not found: ID does not exist" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.494506 4801 scope.go:117] "RemoveContainer" containerID="788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f" Nov 24 22:02:33 crc kubenswrapper[4801]: E1124 22:02:33.495148 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f\": container with ID starting with 788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f not found: ID does not exist" containerID="788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.495207 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f"} err="failed to get container status \"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f\": rpc error: code = NotFound desc = could not find container \"788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f\": container with ID 
starting with 788a442f2de278e048910a1d1c007cf4fd6c99b4b98654202cebd4db62bdc77f not found: ID does not exist" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.495248 4801 scope.go:117] "RemoveContainer" containerID="e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea" Nov 24 22:02:33 crc kubenswrapper[4801]: E1124 22:02:33.495653 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea\": container with ID starting with e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea not found: ID does not exist" containerID="e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea" Nov 24 22:02:33 crc kubenswrapper[4801]: I1124 22:02:33.495685 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea"} err="failed to get container status \"e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea\": rpc error: code = NotFound desc = could not find container \"e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea\": container with ID starting with e2fd5b3684df8a8353b04ad11cfecda017db324f5b3f5d0f904d9703212024ea not found: ID does not exist" Nov 24 22:02:34 crc kubenswrapper[4801]: I1124 22:02:34.678922 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3437d745-3e40-4005-9166-f280552e89c3" path="/var/lib/kubelet/pods/3437d745-3e40-4005-9166-f280552e89c3/volumes" Nov 24 22:02:54 crc kubenswrapper[4801]: I1124 22:02:54.320051 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:02:54 crc kubenswrapper[4801]: I1124 
22:02:54.320769 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.989206 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:02:59 crc kubenswrapper[4801]: E1124 22:02:59.990562 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="extract-content" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.990586 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="extract-content" Nov 24 22:02:59 crc kubenswrapper[4801]: E1124 22:02:59.990643 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.990652 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4801]: E1124 22:02:59.990701 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="extract-utilities" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.990711 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="extract-utilities" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.991050 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3437d745-3e40-4005-9166-f280552e89c3" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4801]: I1124 22:02:59.993500 4801 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.009990 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.167768 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff644\" (UniqueName: \"kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.167920 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.168352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.272218 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff644\" (UniqueName: \"kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 
22:03:00.272534 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.272665 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.273179 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.273253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.296221 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff644\" (UniqueName: \"kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644\") pod \"certified-operators-h65bl\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.331039 4801 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:00 crc kubenswrapper[4801]: W1124 22:03:00.849940 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605c2300_10ab_4cc7_baa3_7bb5c6a4e6a9.slice/crio-314c2b3c8508c1ec0cd9c492e24a93d5b9a9d754b2f8b5159c702900a5f05378 WatchSource:0}: Error finding container 314c2b3c8508c1ec0cd9c492e24a93d5b9a9d754b2f8b5159c702900a5f05378: Status 404 returned error can't find the container with id 314c2b3c8508c1ec0cd9c492e24a93d5b9a9d754b2f8b5159c702900a5f05378 Nov 24 22:03:00 crc kubenswrapper[4801]: I1124 22:03:00.850489 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:03:01 crc kubenswrapper[4801]: I1124 22:03:01.733445 4801 generic.go:334] "Generic (PLEG): container finished" podID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerID="1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49" exitCode=0 Nov 24 22:03:01 crc kubenswrapper[4801]: I1124 22:03:01.733517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerDied","Data":"1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49"} Nov 24 22:03:01 crc kubenswrapper[4801]: I1124 22:03:01.733912 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerStarted","Data":"314c2b3c8508c1ec0cd9c492e24a93d5b9a9d754b2f8b5159c702900a5f05378"} Nov 24 22:03:01 crc kubenswrapper[4801]: I1124 22:03:01.737231 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:03:02 crc kubenswrapper[4801]: I1124 22:03:02.746861 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerStarted","Data":"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339"} Nov 24 22:03:03 crc kubenswrapper[4801]: I1124 22:03:03.761431 4801 generic.go:334] "Generic (PLEG): container finished" podID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerID="cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339" exitCode=0 Nov 24 22:03:03 crc kubenswrapper[4801]: I1124 22:03:03.761552 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerDied","Data":"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339"} Nov 24 22:03:04 crc kubenswrapper[4801]: I1124 22:03:04.778519 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerStarted","Data":"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046"} Nov 24 22:03:04 crc kubenswrapper[4801]: I1124 22:03:04.796334 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h65bl" podStartSLOduration=3.182820525 podStartE2EDuration="5.796316259s" podCreationTimestamp="2025-11-24 22:02:59 +0000 UTC" firstStartedPulling="2025-11-24 22:03:01.736858933 +0000 UTC m=+3353.819445643" lastFinishedPulling="2025-11-24 22:03:04.350354707 +0000 UTC m=+3356.432941377" observedRunningTime="2025-11-24 22:03:04.7950893 +0000 UTC m=+3356.877675970" watchObservedRunningTime="2025-11-24 22:03:04.796316259 +0000 UTC m=+3356.878902929" Nov 24 22:03:10 crc kubenswrapper[4801]: I1124 22:03:10.331179 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:10 crc kubenswrapper[4801]: I1124 22:03:10.331755 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:10 crc kubenswrapper[4801]: I1124 22:03:10.384274 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:10 crc kubenswrapper[4801]: I1124 22:03:10.957229 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:11 crc kubenswrapper[4801]: I1124 22:03:11.044674 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:03:12 crc kubenswrapper[4801]: I1124 22:03:12.895004 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h65bl" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="registry-server" containerID="cri-o://cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046" gracePeriod=2 Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.474967 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.569379 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff644\" (UniqueName: \"kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644\") pod \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.569627 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content\") pod \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.570135 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities\") pod \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\" (UID: \"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9\") " Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.572113 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities" (OuterVolumeSpecName: "utilities") pod "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" (UID: "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.581264 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644" (OuterVolumeSpecName: "kube-api-access-ff644") pod "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" (UID: "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9"). InnerVolumeSpecName "kube-api-access-ff644". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.674165 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff644\" (UniqueName: \"kubernetes.io/projected/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-kube-api-access-ff644\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.674196 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.909166 4801 generic.go:334] "Generic (PLEG): container finished" podID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerID="cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046" exitCode=0 Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.909286 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h65bl" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.909279 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerDied","Data":"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046"} Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.909655 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h65bl" event={"ID":"605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9","Type":"ContainerDied","Data":"314c2b3c8508c1ec0cd9c492e24a93d5b9a9d754b2f8b5159c702900a5f05378"} Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.909685 4801 scope.go:117] "RemoveContainer" containerID="cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.940840 4801 scope.go:117] "RemoveContainer" 
containerID="cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339" Nov 24 22:03:13 crc kubenswrapper[4801]: I1124 22:03:13.984479 4801 scope.go:117] "RemoveContainer" containerID="1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.031152 4801 scope.go:117] "RemoveContainer" containerID="cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046" Nov 24 22:03:14 crc kubenswrapper[4801]: E1124 22:03:14.031782 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046\": container with ID starting with cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046 not found: ID does not exist" containerID="cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.031840 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046"} err="failed to get container status \"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046\": rpc error: code = NotFound desc = could not find container \"cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046\": container with ID starting with cfa12d1dbcfb60e305d60815c52bbfc33c7c2df9ecda53baf48d4e96595e1046 not found: ID does not exist" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.031877 4801 scope.go:117] "RemoveContainer" containerID="cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339" Nov 24 22:03:14 crc kubenswrapper[4801]: E1124 22:03:14.032445 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339\": container with ID starting with 
cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339 not found: ID does not exist" containerID="cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.032561 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339"} err="failed to get container status \"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339\": rpc error: code = NotFound desc = could not find container \"cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339\": container with ID starting with cbe6f8a10bdba52ce7c330ef80932f563f28058443702e31d7d9fde37a13a339 not found: ID does not exist" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.032676 4801 scope.go:117] "RemoveContainer" containerID="1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49" Nov 24 22:03:14 crc kubenswrapper[4801]: E1124 22:03:14.033176 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49\": container with ID starting with 1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49 not found: ID does not exist" containerID="1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.033211 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49"} err="failed to get container status \"1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49\": rpc error: code = NotFound desc = could not find container \"1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49\": container with ID starting with 1d2c40a4b8dff57266b48b21ac0993b5b4b2845df6ec9a77024cadd8db14ed49 not found: ID does not 
exist" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.161839 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" (UID: "605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.190042 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.255771 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.269198 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h65bl"] Nov 24 22:03:14 crc kubenswrapper[4801]: I1124 22:03:14.687047 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" path="/var/lib/kubelet/pods/605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9/volumes" Nov 24 22:03:24 crc kubenswrapper[4801]: I1124 22:03:24.319985 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:03:24 crc kubenswrapper[4801]: I1124 22:03:24.320656 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:03:24 crc kubenswrapper[4801]: I1124 22:03:24.320725 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:03:24 crc kubenswrapper[4801]: I1124 22:03:24.321869 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:03:24 crc kubenswrapper[4801]: I1124 22:03:24.321946 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b" gracePeriod=600 Nov 24 22:03:25 crc kubenswrapper[4801]: I1124 22:03:25.067787 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b" exitCode=0 Nov 24 22:03:25 crc kubenswrapper[4801]: I1124 22:03:25.067876 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b"} Nov 24 22:03:25 crc kubenswrapper[4801]: I1124 22:03:25.068106 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac"} Nov 24 22:03:25 crc kubenswrapper[4801]: I1124 22:03:25.068136 4801 scope.go:117] "RemoveContainer" containerID="690e4fef9b5b71c3f4b5fd2a59c85b497d8dedb71823937d5d56cd4aa7f52260" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.267145 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:15 crc kubenswrapper[4801]: E1124 22:05:15.269162 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="extract-utilities" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.269205 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="extract-utilities" Nov 24 22:05:15 crc kubenswrapper[4801]: E1124 22:05:15.269269 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="registry-server" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.269287 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="registry-server" Nov 24 22:05:15 crc kubenswrapper[4801]: E1124 22:05:15.269329 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="extract-content" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.269346 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="extract-content" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.269971 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="605c2300-10ab-4cc7-baa3-7bb5c6a4e6a9" containerName="registry-server" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.274132 4801 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.285771 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.344875 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrvk\" (UniqueName: \"kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.345475 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.345552 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.448330 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.448412 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.448545 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrvk\" (UniqueName: \"kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.448911 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.448991 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.468901 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrvk\" (UniqueName: \"kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk\") pod \"redhat-marketplace-g6hm7\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:15 crc kubenswrapper[4801]: I1124 22:05:15.607055 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:16 crc kubenswrapper[4801]: W1124 22:05:16.138599 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4f9482_9d99_4faa_a813_d479a2186d30.slice/crio-1bc572e70d173ed2d5553594337d959329bc4a8cf2306c50a2efa87a810a71e6 WatchSource:0}: Error finding container 1bc572e70d173ed2d5553594337d959329bc4a8cf2306c50a2efa87a810a71e6: Status 404 returned error can't find the container with id 1bc572e70d173ed2d5553594337d959329bc4a8cf2306c50a2efa87a810a71e6 Nov 24 22:05:16 crc kubenswrapper[4801]: I1124 22:05:16.141781 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:16 crc kubenswrapper[4801]: I1124 22:05:16.533662 4801 generic.go:334] "Generic (PLEG): container finished" podID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerID="3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77" exitCode=0 Nov 24 22:05:16 crc kubenswrapper[4801]: I1124 22:05:16.533810 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerDied","Data":"3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77"} Nov 24 22:05:16 crc kubenswrapper[4801]: I1124 22:05:16.534628 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerStarted","Data":"1bc572e70d173ed2d5553594337d959329bc4a8cf2306c50a2efa87a810a71e6"} Nov 24 22:05:18 crc kubenswrapper[4801]: I1124 22:05:18.571814 4801 generic.go:334] "Generic (PLEG): container finished" podID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerID="c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95" exitCode=0 Nov 24 22:05:18 crc kubenswrapper[4801]: I1124 
22:05:18.571962 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerDied","Data":"c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95"} Nov 24 22:05:19 crc kubenswrapper[4801]: I1124 22:05:19.595721 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerStarted","Data":"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a"} Nov 24 22:05:19 crc kubenswrapper[4801]: I1124 22:05:19.622699 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6hm7" podStartSLOduration=2.111745377 podStartE2EDuration="4.62267176s" podCreationTimestamp="2025-11-24 22:05:15 +0000 UTC" firstStartedPulling="2025-11-24 22:05:16.538665038 +0000 UTC m=+3488.621251718" lastFinishedPulling="2025-11-24 22:05:19.049591431 +0000 UTC m=+3491.132178101" observedRunningTime="2025-11-24 22:05:19.610327276 +0000 UTC m=+3491.692913966" watchObservedRunningTime="2025-11-24 22:05:19.62267176 +0000 UTC m=+3491.705258430" Nov 24 22:05:24 crc kubenswrapper[4801]: I1124 22:05:24.320476 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:05:24 crc kubenswrapper[4801]: I1124 22:05:24.321251 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:05:25 crc 
kubenswrapper[4801]: I1124 22:05:25.607776 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:25 crc kubenswrapper[4801]: I1124 22:05:25.607851 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:25 crc kubenswrapper[4801]: I1124 22:05:25.671143 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:25 crc kubenswrapper[4801]: I1124 22:05:25.745905 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:25 crc kubenswrapper[4801]: I1124 22:05:25.912051 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:27 crc kubenswrapper[4801]: I1124 22:05:27.716635 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6hm7" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="registry-server" containerID="cri-o://0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a" gracePeriod=2 Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.286327 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.383612 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqrvk\" (UniqueName: \"kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk\") pod \"2d4f9482-9d99-4faa-a813-d479a2186d30\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.383978 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities\") pod \"2d4f9482-9d99-4faa-a813-d479a2186d30\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.384154 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content\") pod \"2d4f9482-9d99-4faa-a813-d479a2186d30\" (UID: \"2d4f9482-9d99-4faa-a813-d479a2186d30\") " Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.384711 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities" (OuterVolumeSpecName: "utilities") pod "2d4f9482-9d99-4faa-a813-d479a2186d30" (UID: "2d4f9482-9d99-4faa-a813-d479a2186d30"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.385543 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.391690 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk" (OuterVolumeSpecName: "kube-api-access-lqrvk") pod "2d4f9482-9d99-4faa-a813-d479a2186d30" (UID: "2d4f9482-9d99-4faa-a813-d479a2186d30"). InnerVolumeSpecName "kube-api-access-lqrvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.410590 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d4f9482-9d99-4faa-a813-d479a2186d30" (UID: "2d4f9482-9d99-4faa-a813-d479a2186d30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.488192 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqrvk\" (UniqueName: \"kubernetes.io/projected/2d4f9482-9d99-4faa-a813-d479a2186d30-kube-api-access-lqrvk\") on node \"crc\" DevicePath \"\"" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.488552 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4f9482-9d99-4faa-a813-d479a2186d30-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.732145 4801 generic.go:334] "Generic (PLEG): container finished" podID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerID="0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a" exitCode=0 Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.732214 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6hm7" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.732211 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerDied","Data":"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a"} Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.732382 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6hm7" event={"ID":"2d4f9482-9d99-4faa-a813-d479a2186d30","Type":"ContainerDied","Data":"1bc572e70d173ed2d5553594337d959329bc4a8cf2306c50a2efa87a810a71e6"} Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.732419 4801 scope.go:117] "RemoveContainer" containerID="0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.765248 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.777037 4801 scope.go:117] "RemoveContainer" containerID="c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.781172 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6hm7"] Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.808759 4801 scope.go:117] "RemoveContainer" containerID="3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.873039 4801 scope.go:117] "RemoveContainer" containerID="0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a" Nov 24 22:05:28 crc kubenswrapper[4801]: E1124 22:05:28.873634 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a\": container with ID starting with 0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a not found: ID does not exist" containerID="0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.873687 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a"} err="failed to get container status \"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a\": rpc error: code = NotFound desc = could not find container \"0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a\": container with ID starting with 0064afc053509f5684bbead852ddb95e837ca38b9e90956d5e7a3c8ba708ff4a not found: ID does not exist" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.873721 4801 scope.go:117] "RemoveContainer" 
containerID="c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95" Nov 24 22:05:28 crc kubenswrapper[4801]: E1124 22:05:28.874403 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95\": container with ID starting with c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95 not found: ID does not exist" containerID="c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.874492 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95"} err="failed to get container status \"c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95\": rpc error: code = NotFound desc = could not find container \"c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95\": container with ID starting with c1d0ea8acc78847b252d6df79a14b5c74032526a82313aa4dc2404bdeaa98b95 not found: ID does not exist" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.874534 4801 scope.go:117] "RemoveContainer" containerID="3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77" Nov 24 22:05:28 crc kubenswrapper[4801]: E1124 22:05:28.874997 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77\": container with ID starting with 3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77 not found: ID does not exist" containerID="3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77" Nov 24 22:05:28 crc kubenswrapper[4801]: I1124 22:05:28.875031 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77"} err="failed to get container status \"3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77\": rpc error: code = NotFound desc = could not find container \"3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77\": container with ID starting with 3f5d5e26f769b0fb762bcead4d0b4413b0694c55352259b8b22bc80a2dca8b77 not found: ID does not exist" Nov 24 22:05:30 crc kubenswrapper[4801]: I1124 22:05:30.682817 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" path="/var/lib/kubelet/pods/2d4f9482-9d99-4faa-a813-d479a2186d30/volumes" Nov 24 22:05:54 crc kubenswrapper[4801]: I1124 22:05:54.321980 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:05:54 crc kubenswrapper[4801]: I1124 22:05:54.322721 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.322574 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.323205 4801 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.323275 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.324290 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.324404 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" gracePeriod=600 Nov 24 22:06:24 crc kubenswrapper[4801]: E1124 22:06:24.482512 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.818907 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" 
containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" exitCode=0 Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.818961 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac"} Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.819004 4801 scope.go:117] "RemoveContainer" containerID="115186694d7815471b4ebd7ee5e70560cfae1082fcf87013c124f5edc6da064b" Nov 24 22:06:24 crc kubenswrapper[4801]: I1124 22:06:24.822433 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:06:24 crc kubenswrapper[4801]: E1124 22:06:24.823750 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:06:38 crc kubenswrapper[4801]: I1124 22:06:38.674768 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:06:38 crc kubenswrapper[4801]: E1124 22:06:38.675347 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:06:49 crc kubenswrapper[4801]: I1124 
22:06:49.665346 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:06:49 crc kubenswrapper[4801]: E1124 22:06:49.666559 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:07:01 crc kubenswrapper[4801]: I1124 22:07:01.665356 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:07:01 crc kubenswrapper[4801]: E1124 22:07:01.666612 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:07:12 crc kubenswrapper[4801]: I1124 22:07:12.664743 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:07:12 crc kubenswrapper[4801]: E1124 22:07:12.665782 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:07:26 crc 
kubenswrapper[4801]: I1124 22:07:26.664673 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:07:26 crc kubenswrapper[4801]: E1124 22:07:26.667025 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:07:38 crc kubenswrapper[4801]: I1124 22:07:38.679027 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:07:38 crc kubenswrapper[4801]: E1124 22:07:38.680327 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:07:50 crc kubenswrapper[4801]: I1124 22:07:50.664562 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:07:50 crc kubenswrapper[4801]: E1124 22:07:50.665659 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 
24 22:08:01 crc kubenswrapper[4801]: I1124 22:08:01.664617 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:08:01 crc kubenswrapper[4801]: E1124 22:08:01.665493 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:08:16 crc kubenswrapper[4801]: I1124 22:08:16.666484 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:08:16 crc kubenswrapper[4801]: E1124 22:08:16.667541 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:08:27 crc kubenswrapper[4801]: I1124 22:08:27.664433 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:08:27 crc kubenswrapper[4801]: E1124 22:08:27.665432 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:08:39 crc kubenswrapper[4801]: I1124 22:08:39.663977 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:08:39 crc kubenswrapper[4801]: E1124 22:08:39.664969 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:08:53 crc kubenswrapper[4801]: I1124 22:08:53.664460 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:08:53 crc kubenswrapper[4801]: E1124 22:08:53.665190 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:09:04 crc kubenswrapper[4801]: I1124 22:09:04.664333 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:09:04 crc kubenswrapper[4801]: E1124 22:09:04.665180 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:09:15 crc kubenswrapper[4801]: I1124 22:09:15.664588 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:09:15 crc kubenswrapper[4801]: E1124 22:09:15.666242 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:09:30 crc kubenswrapper[4801]: I1124 22:09:30.664877 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:09:30 crc kubenswrapper[4801]: E1124 22:09:30.666000 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:09:41 crc kubenswrapper[4801]: I1124 22:09:41.664916 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:09:41 crc kubenswrapper[4801]: E1124 22:09:41.665793 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:09:56 crc kubenswrapper[4801]: I1124 22:09:56.665134 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:09:56 crc kubenswrapper[4801]: E1124 22:09:56.666232 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:10:11 crc kubenswrapper[4801]: I1124 22:10:11.665122 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:10:11 crc kubenswrapper[4801]: E1124 22:10:11.666101 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:10:26 crc kubenswrapper[4801]: I1124 22:10:26.664465 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:10:26 crc kubenswrapper[4801]: E1124 22:10:26.665227 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.534221 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:36 crc kubenswrapper[4801]: E1124 22:10:36.535662 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="extract-content" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.535684 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="extract-content" Nov 24 22:10:36 crc kubenswrapper[4801]: E1124 22:10:36.535699 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="registry-server" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.535707 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="registry-server" Nov 24 22:10:36 crc kubenswrapper[4801]: E1124 22:10:36.535742 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="extract-utilities" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.535751 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="extract-utilities" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.536028 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4f9482-9d99-4faa-a813-d479a2186d30" containerName="registry-server" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.538221 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.571033 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.673074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.673406 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.673614 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtvg\" (UniqueName: \"kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.775737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.775910 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qhtvg\" (UniqueName: \"kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.775932 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.777572 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.778181 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.815245 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtvg\" (UniqueName: \"kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg\") pod \"community-operators-s664t\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:36 crc kubenswrapper[4801]: I1124 22:10:36.866551 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:37 crc kubenswrapper[4801]: I1124 22:10:37.449635 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:38 crc kubenswrapper[4801]: I1124 22:10:38.224495 4801 generic.go:334] "Generic (PLEG): container finished" podID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerID="620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a" exitCode=0 Nov 24 22:10:38 crc kubenswrapper[4801]: I1124 22:10:38.224866 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerDied","Data":"620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a"} Nov 24 22:10:38 crc kubenswrapper[4801]: I1124 22:10:38.224906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerStarted","Data":"2a25494624eb2475d0717ac75eeb5b2c269f3393a9b4f6ccc1237779e56032fa"} Nov 24 22:10:38 crc kubenswrapper[4801]: I1124 22:10:38.228167 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:10:39 crc kubenswrapper[4801]: I1124 22:10:39.237585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerStarted","Data":"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2"} Nov 24 22:10:40 crc kubenswrapper[4801]: I1124 22:10:40.665424 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:10:40 crc kubenswrapper[4801]: E1124 22:10:40.666349 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:10:41 crc kubenswrapper[4801]: I1124 22:10:41.271282 4801 generic.go:334] "Generic (PLEG): container finished" podID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerID="f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2" exitCode=0 Nov 24 22:10:41 crc kubenswrapper[4801]: I1124 22:10:41.271336 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerDied","Data":"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2"} Nov 24 22:10:42 crc kubenswrapper[4801]: I1124 22:10:42.284861 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerStarted","Data":"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782"} Nov 24 22:10:42 crc kubenswrapper[4801]: I1124 22:10:42.311870 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s664t" podStartSLOduration=2.688565454 podStartE2EDuration="6.311847585s" podCreationTimestamp="2025-11-24 22:10:36 +0000 UTC" firstStartedPulling="2025-11-24 22:10:38.227798949 +0000 UTC m=+3810.310385619" lastFinishedPulling="2025-11-24 22:10:41.85108107 +0000 UTC m=+3813.933667750" observedRunningTime="2025-11-24 22:10:42.300240384 +0000 UTC m=+3814.382827054" watchObservedRunningTime="2025-11-24 22:10:42.311847585 +0000 UTC m=+3814.394434245" Nov 24 22:10:46 crc kubenswrapper[4801]: I1124 22:10:46.867025 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:46 crc kubenswrapper[4801]: I1124 22:10:46.867670 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:46 crc kubenswrapper[4801]: I1124 22:10:46.937840 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:47 crc kubenswrapper[4801]: I1124 22:10:47.430659 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:47 crc kubenswrapper[4801]: I1124 22:10:47.497012 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:49 crc kubenswrapper[4801]: I1124 22:10:49.371494 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s664t" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="registry-server" containerID="cri-o://cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782" gracePeriod=2 Nov 24 22:10:49 crc kubenswrapper[4801]: I1124 22:10:49.962978 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.051182 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities\") pod \"a4979af2-5655-4075-97ee-a3bdea46e31b\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.051276 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content\") pod \"a4979af2-5655-4075-97ee-a3bdea46e31b\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.051450 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhtvg\" (UniqueName: \"kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg\") pod \"a4979af2-5655-4075-97ee-a3bdea46e31b\" (UID: \"a4979af2-5655-4075-97ee-a3bdea46e31b\") " Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.052293 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities" (OuterVolumeSpecName: "utilities") pod "a4979af2-5655-4075-97ee-a3bdea46e31b" (UID: "a4979af2-5655-4075-97ee-a3bdea46e31b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.057236 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg" (OuterVolumeSpecName: "kube-api-access-qhtvg") pod "a4979af2-5655-4075-97ee-a3bdea46e31b" (UID: "a4979af2-5655-4075-97ee-a3bdea46e31b"). InnerVolumeSpecName "kube-api-access-qhtvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.114743 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4979af2-5655-4075-97ee-a3bdea46e31b" (UID: "a4979af2-5655-4075-97ee-a3bdea46e31b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.155299 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.155338 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4979af2-5655-4075-97ee-a3bdea46e31b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.155353 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhtvg\" (UniqueName: \"kubernetes.io/projected/a4979af2-5655-4075-97ee-a3bdea46e31b-kube-api-access-qhtvg\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.388873 4801 generic.go:334] "Generic (PLEG): container finished" podID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerID="cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782" exitCode=0 Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.388937 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerDied","Data":"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782"} Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.389003 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-s664t" event={"ID":"a4979af2-5655-4075-97ee-a3bdea46e31b","Type":"ContainerDied","Data":"2a25494624eb2475d0717ac75eeb5b2c269f3393a9b4f6ccc1237779e56032fa"} Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.389001 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s664t" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.389032 4801 scope.go:117] "RemoveContainer" containerID="cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.427308 4801 scope.go:117] "RemoveContainer" containerID="f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.456305 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.471555 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s664t"] Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.479533 4801 scope.go:117] "RemoveContainer" containerID="620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.537900 4801 scope.go:117] "RemoveContainer" containerID="cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782" Nov 24 22:10:50 crc kubenswrapper[4801]: E1124 22:10:50.538420 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782\": container with ID starting with cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782 not found: ID does not exist" containerID="cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 
22:10:50.538458 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782"} err="failed to get container status \"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782\": rpc error: code = NotFound desc = could not find container \"cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782\": container with ID starting with cd4e7470357654253410fae933282f01ca2e88f3512fa9736e0aa0a13c5dd782 not found: ID does not exist" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.538480 4801 scope.go:117] "RemoveContainer" containerID="f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2" Nov 24 22:10:50 crc kubenswrapper[4801]: E1124 22:10:50.538952 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2\": container with ID starting with f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2 not found: ID does not exist" containerID="f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.538974 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2"} err="failed to get container status \"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2\": rpc error: code = NotFound desc = could not find container \"f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2\": container with ID starting with f3262311ee4affbad49982287d8ca44fa6e28e4ceb7ebc79679a07f5831d0cc2 not found: ID does not exist" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.538987 4801 scope.go:117] "RemoveContainer" containerID="620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a" Nov 24 22:10:50 crc 
kubenswrapper[4801]: E1124 22:10:50.539232 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a\": container with ID starting with 620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a not found: ID does not exist" containerID="620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.539251 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a"} err="failed to get container status \"620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a\": rpc error: code = NotFound desc = could not find container \"620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a\": container with ID starting with 620b1e11f014a853dd76f4f700374d4bb9fb6b7bd108622895e1fa3923a23b7a not found: ID does not exist" Nov 24 22:10:50 crc kubenswrapper[4801]: I1124 22:10:50.681932 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" path="/var/lib/kubelet/pods/a4979af2-5655-4075-97ee-a3bdea46e31b/volumes" Nov 24 22:10:54 crc kubenswrapper[4801]: I1124 22:10:54.664916 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:10:54 crc kubenswrapper[4801]: E1124 22:10:54.666439 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:11:05 crc 
kubenswrapper[4801]: I1124 22:11:05.664411 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:11:05 crc kubenswrapper[4801]: E1124 22:11:05.665458 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:11:19 crc kubenswrapper[4801]: I1124 22:11:19.663993 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:11:19 crc kubenswrapper[4801]: E1124 22:11:19.666177 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:11:34 crc kubenswrapper[4801]: I1124 22:11:34.664686 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:11:34 crc kubenswrapper[4801]: I1124 22:11:34.964986 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623"} Nov 24 22:13:24 crc kubenswrapper[4801]: E1124 22:13:24.254115 4801 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.83:45976->38.102.83.83:34545: write tcp 38.102.83.83:45976->38.102.83.83:34545: write: broken pipe Nov 24 22:13:54 crc kubenswrapper[4801]: I1124 22:13:54.320758 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:13:54 crc kubenswrapper[4801]: I1124 22:13:54.321547 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:14:12 crc kubenswrapper[4801]: E1124 22:14:12.623963 4801 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.83:55208->38.102.83.83:34545: write tcp 38.102.83.83:55208->38.102.83.83:34545: write: broken pipe Nov 24 22:14:24 crc kubenswrapper[4801]: I1124 22:14:24.319572 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:14:24 crc kubenswrapper[4801]: I1124 22:14:24.320349 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.320200 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.320967 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.321034 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.322392 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.322495 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623" gracePeriod=600 Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.520879 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623" exitCode=0 Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.520953 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623"} Nov 24 22:14:54 crc kubenswrapper[4801]: I1124 22:14:54.521297 4801 scope.go:117] "RemoveContainer" containerID="2b5c5942403e94b55149d39fafade1a6d8d8a66c5e0b51e816cecc334eefe8ac" Nov 24 22:14:55 crc kubenswrapper[4801]: I1124 22:14:55.536680 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a"} Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.147026 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6"] Nov 24 22:15:00 crc kubenswrapper[4801]: E1124 22:15:00.148215 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="extract-content" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.148234 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="extract-content" Nov 24 22:15:00 crc kubenswrapper[4801]: E1124 22:15:00.148270 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.148276 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4801]: E1124 22:15:00.148291 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="extract-utilities" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 
22:15:00.148297 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="extract-utilities" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.148587 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4979af2-5655-4075-97ee-a3bdea46e31b" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.149452 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.151531 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.151727 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.172524 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6"] Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.177062 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.177310 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxq58\" (UniqueName: \"kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.177666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.280160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.280538 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxq58\" (UniqueName: \"kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.280738 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.282410 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.287781 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.299549 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxq58\" (UniqueName: \"kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58\") pod \"collect-profiles-29400375-dmcr6\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.472332 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:00 crc kubenswrapper[4801]: I1124 22:15:00.990624 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6"] Nov 24 22:15:01 crc kubenswrapper[4801]: I1124 22:15:01.642929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" event={"ID":"3278058a-0948-4f2e-97ce-21ebea7da60d","Type":"ContainerStarted","Data":"ee3e03ee40e50270c485875eff8d0d539dfaae63c4959b91bdf90cb9e814227c"} Nov 24 22:15:01 crc kubenswrapper[4801]: I1124 22:15:01.643287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" event={"ID":"3278058a-0948-4f2e-97ce-21ebea7da60d","Type":"ContainerStarted","Data":"a41714d57f7ea8f6a8c6c285bda09b415555cf5238b0b946f0eb45ad4bffa076"} Nov 24 22:15:01 crc kubenswrapper[4801]: I1124 22:15:01.677899 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" podStartSLOduration=1.677880415 podStartE2EDuration="1.677880415s" podCreationTimestamp="2025-11-24 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:15:01.671202811 +0000 UTC m=+4073.753789481" watchObservedRunningTime="2025-11-24 22:15:01.677880415 +0000 UTC m=+4073.760467085" Nov 24 22:15:02 crc kubenswrapper[4801]: I1124 22:15:02.659215 4801 generic.go:334] "Generic (PLEG): container finished" podID="3278058a-0948-4f2e-97ce-21ebea7da60d" containerID="ee3e03ee40e50270c485875eff8d0d539dfaae63c4959b91bdf90cb9e814227c" exitCode=0 Nov 24 22:15:02 crc kubenswrapper[4801]: I1124 22:15:02.659275 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" event={"ID":"3278058a-0948-4f2e-97ce-21ebea7da60d","Type":"ContainerDied","Data":"ee3e03ee40e50270c485875eff8d0d539dfaae63c4959b91bdf90cb9e814227c"} Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.117179 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.217840 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxq58\" (UniqueName: \"kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58\") pod \"3278058a-0948-4f2e-97ce-21ebea7da60d\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.218203 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume\") pod \"3278058a-0948-4f2e-97ce-21ebea7da60d\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.218261 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume\") pod \"3278058a-0948-4f2e-97ce-21ebea7da60d\" (UID: \"3278058a-0948-4f2e-97ce-21ebea7da60d\") " Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.219353 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3278058a-0948-4f2e-97ce-21ebea7da60d" (UID: "3278058a-0948-4f2e-97ce-21ebea7da60d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.219910 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3278058a-0948-4f2e-97ce-21ebea7da60d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.225472 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3278058a-0948-4f2e-97ce-21ebea7da60d" (UID: "3278058a-0948-4f2e-97ce-21ebea7da60d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.228950 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58" (OuterVolumeSpecName: "kube-api-access-rxq58") pod "3278058a-0948-4f2e-97ce-21ebea7da60d" (UID: "3278058a-0948-4f2e-97ce-21ebea7da60d"). InnerVolumeSpecName "kube-api-access-rxq58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.322832 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxq58\" (UniqueName: \"kubernetes.io/projected/3278058a-0948-4f2e-97ce-21ebea7da60d-kube-api-access-rxq58\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.322886 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3278058a-0948-4f2e-97ce-21ebea7da60d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.699006 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" event={"ID":"3278058a-0948-4f2e-97ce-21ebea7da60d","Type":"ContainerDied","Data":"a41714d57f7ea8f6a8c6c285bda09b415555cf5238b0b946f0eb45ad4bffa076"} Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.699055 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a41714d57f7ea8f6a8c6c285bda09b415555cf5238b0b946f0eb45ad4bffa076" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.699112 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-dmcr6" Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.763088 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr"] Nov 24 22:15:04 crc kubenswrapper[4801]: I1124 22:15:04.775959 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-hn5rr"] Nov 24 22:15:06 crc kubenswrapper[4801]: I1124 22:15:06.680043 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357f85b7-cb19-4fe5-a2ca-009de3fdf2bc" path="/var/lib/kubelet/pods/357f85b7-cb19-4fe5-a2ca-009de3fdf2bc/volumes" Nov 24 22:15:15 crc kubenswrapper[4801]: I1124 22:15:15.924124 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:15 crc kubenswrapper[4801]: E1124 22:15:15.927310 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278058a-0948-4f2e-97ce-21ebea7da60d" containerName="collect-profiles" Nov 24 22:15:15 crc kubenswrapper[4801]: I1124 22:15:15.927522 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3278058a-0948-4f2e-97ce-21ebea7da60d" containerName="collect-profiles" Nov 24 22:15:15 crc kubenswrapper[4801]: I1124 22:15:15.928057 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3278058a-0948-4f2e-97ce-21ebea7da60d" containerName="collect-profiles" Nov 24 22:15:15 crc kubenswrapper[4801]: I1124 22:15:15.931565 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:15 crc kubenswrapper[4801]: I1124 22:15:15.935164 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.070567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.070643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.070673 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rjl\" (UniqueName: \"kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.172753 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.172812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.172834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5rjl\" (UniqueName: \"kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.173573 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.173617 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.195641 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5rjl\" (UniqueName: \"kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl\") pod \"redhat-operators-lhwm2\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.256634 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.741227 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:16 crc kubenswrapper[4801]: I1124 22:15:16.862679 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerStarted","Data":"c5c332315d9d29920a4cce884380280a153df0d94085fd58e8405668c2d2192c"} Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.721173 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.724392 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.749439 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.822729 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.823072 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsbm\" (UniqueName: \"kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 
22:15:17.823476 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.876730 4801 generic.go:334] "Generic (PLEG): container finished" podID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerID="1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5" exitCode=0 Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.876780 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerDied","Data":"1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5"} Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.928058 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.928337 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.928466 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsbm\" (UniqueName: \"kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm\") pod 
\"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.928696 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.929033 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:17 crc kubenswrapper[4801]: I1124 22:15:17.952041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfsbm\" (UniqueName: \"kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm\") pod \"certified-operators-snfpm\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.057946 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.600893 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.890692 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerStarted","Data":"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd"} Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.898080 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerID="f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76" exitCode=0 Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.898139 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerDied","Data":"f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76"} Nov 24 22:15:18 crc kubenswrapper[4801]: I1124 22:15:18.898175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerStarted","Data":"a0e5fe1d9785782857c8dbd9b7b47287ee7f88b152ecabbd3d09a5bfb1995a20"} Nov 24 22:15:19 crc kubenswrapper[4801]: I1124 22:15:19.913182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerStarted","Data":"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb"} Nov 24 22:15:22 crc kubenswrapper[4801]: I1124 22:15:22.955732 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" 
containerID="93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb" exitCode=0 Nov 24 22:15:22 crc kubenswrapper[4801]: I1124 22:15:22.955804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerDied","Data":"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb"} Nov 24 22:15:23 crc kubenswrapper[4801]: I1124 22:15:23.972700 4801 generic.go:334] "Generic (PLEG): container finished" podID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerID="2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd" exitCode=0 Nov 24 22:15:23 crc kubenswrapper[4801]: I1124 22:15:23.972789 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerDied","Data":"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd"} Nov 24 22:15:24 crc kubenswrapper[4801]: I1124 22:15:24.988560 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerStarted","Data":"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac"} Nov 24 22:15:24 crc kubenswrapper[4801]: I1124 22:15:24.991652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerStarted","Data":"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa"} Nov 24 22:15:25 crc kubenswrapper[4801]: I1124 22:15:25.031698 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhwm2" podStartSLOduration=3.429766283 podStartE2EDuration="10.031667193s" podCreationTimestamp="2025-11-24 22:15:15 +0000 UTC" firstStartedPulling="2025-11-24 22:15:17.880158696 +0000 UTC 
m=+4089.962745366" lastFinishedPulling="2025-11-24 22:15:24.482059566 +0000 UTC m=+4096.564646276" observedRunningTime="2025-11-24 22:15:25.009595766 +0000 UTC m=+4097.092182486" watchObservedRunningTime="2025-11-24 22:15:25.031667193 +0000 UTC m=+4097.114253873" Nov 24 22:15:25 crc kubenswrapper[4801]: I1124 22:15:25.034731 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snfpm" podStartSLOduration=3.538508345 podStartE2EDuration="8.034716507s" podCreationTimestamp="2025-11-24 22:15:17 +0000 UTC" firstStartedPulling="2025-11-24 22:15:18.899874064 +0000 UTC m=+4090.982460724" lastFinishedPulling="2025-11-24 22:15:23.396082176 +0000 UTC m=+4095.478668886" observedRunningTime="2025-11-24 22:15:25.026587477 +0000 UTC m=+4097.109174167" watchObservedRunningTime="2025-11-24 22:15:25.034716507 +0000 UTC m=+4097.117303197" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.257306 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.257793 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.531195 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.535454 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.565865 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.716544 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2lf\" (UniqueName: \"kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.717329 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.717684 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.818594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.818696 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tv2lf\" (UniqueName: \"kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.818791 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.819222 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.819220 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.840391 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2lf\" (UniqueName: \"kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf\") pod \"redhat-marketplace-d5r8j\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:26 crc kubenswrapper[4801]: I1124 22:15:26.858699 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:27 crc kubenswrapper[4801]: I1124 22:15:27.311291 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhwm2" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" probeResult="failure" output=< Nov 24 22:15:27 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:15:27 crc kubenswrapper[4801]: > Nov 24 22:15:27 crc kubenswrapper[4801]: I1124 22:15:27.436321 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:27 crc kubenswrapper[4801]: W1124 22:15:27.442276 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10816e0_2eac_4bbf_837a_d9d4e6297196.slice/crio-26549cda404e12fb39f578c357dcdfcc17f77ea4df6600a5ba343861f967bbb7 WatchSource:0}: Error finding container 26549cda404e12fb39f578c357dcdfcc17f77ea4df6600a5ba343861f967bbb7: Status 404 returned error can't find the container with id 26549cda404e12fb39f578c357dcdfcc17f77ea4df6600a5ba343861f967bbb7 Nov 24 22:15:28 crc kubenswrapper[4801]: I1124 22:15:28.038545 4801 generic.go:334] "Generic (PLEG): container finished" podID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerID="a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b" exitCode=0 Nov 24 22:15:28 crc kubenswrapper[4801]: I1124 22:15:28.038636 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerDied","Data":"a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b"} Nov 24 22:15:28 crc kubenswrapper[4801]: I1124 22:15:28.038976 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" 
event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerStarted","Data":"26549cda404e12fb39f578c357dcdfcc17f77ea4df6600a5ba343861f967bbb7"} Nov 24 22:15:28 crc kubenswrapper[4801]: I1124 22:15:28.058145 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:28 crc kubenswrapper[4801]: I1124 22:15:28.058194 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:29 crc kubenswrapper[4801]: I1124 22:15:29.053903 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerStarted","Data":"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6"} Nov 24 22:15:29 crc kubenswrapper[4801]: I1124 22:15:29.110267 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-snfpm" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="registry-server" probeResult="failure" output=< Nov 24 22:15:29 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:15:29 crc kubenswrapper[4801]: > Nov 24 22:15:30 crc kubenswrapper[4801]: I1124 22:15:30.078722 4801 generic.go:334] "Generic (PLEG): container finished" podID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerID="579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6" exitCode=0 Nov 24 22:15:30 crc kubenswrapper[4801]: I1124 22:15:30.078801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerDied","Data":"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6"} Nov 24 22:15:31 crc kubenswrapper[4801]: I1124 22:15:31.100171 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerStarted","Data":"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94"} Nov 24 22:15:31 crc kubenswrapper[4801]: I1124 22:15:31.138186 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5r8j" podStartSLOduration=2.673748396 podStartE2EDuration="5.138165317s" podCreationTimestamp="2025-11-24 22:15:26 +0000 UTC" firstStartedPulling="2025-11-24 22:15:28.042008709 +0000 UTC m=+4100.124595379" lastFinishedPulling="2025-11-24 22:15:30.50642562 +0000 UTC m=+4102.589012300" observedRunningTime="2025-11-24 22:15:31.118974548 +0000 UTC m=+4103.201561258" watchObservedRunningTime="2025-11-24 22:15:31.138165317 +0000 UTC m=+4103.220751997" Nov 24 22:15:36 crc kubenswrapper[4801]: I1124 22:15:36.859759 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:36 crc kubenswrapper[4801]: I1124 22:15:36.860546 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:37 crc kubenswrapper[4801]: I1124 22:15:37.301765 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhwm2" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" probeResult="failure" output=< Nov 24 22:15:37 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:15:37 crc kubenswrapper[4801]: > Nov 24 22:15:37 crc kubenswrapper[4801]: I1124 22:15:37.368434 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:37 crc kubenswrapper[4801]: I1124 22:15:37.422417 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:37 crc kubenswrapper[4801]: I1124 22:15:37.611817 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:38 crc kubenswrapper[4801]: I1124 22:15:38.137819 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:38 crc kubenswrapper[4801]: I1124 22:15:38.208495 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.210505 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5r8j" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="registry-server" containerID="cri-o://f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94" gracePeriod=2 Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.717908 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.883316 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities\") pod \"c10816e0-2eac-4bbf-837a-d9d4e6297196\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.883813 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content\") pod \"c10816e0-2eac-4bbf-837a-d9d4e6297196\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.884114 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv2lf\" (UniqueName: \"kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf\") pod \"c10816e0-2eac-4bbf-837a-d9d4e6297196\" (UID: \"c10816e0-2eac-4bbf-837a-d9d4e6297196\") " Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.885164 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities" (OuterVolumeSpecName: "utilities") pod "c10816e0-2eac-4bbf-837a-d9d4e6297196" (UID: "c10816e0-2eac-4bbf-837a-d9d4e6297196"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.894657 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf" (OuterVolumeSpecName: "kube-api-access-tv2lf") pod "c10816e0-2eac-4bbf-837a-d9d4e6297196" (UID: "c10816e0-2eac-4bbf-837a-d9d4e6297196"). InnerVolumeSpecName "kube-api-access-tv2lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.905324 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c10816e0-2eac-4bbf-837a-d9d4e6297196" (UID: "c10816e0-2eac-4bbf-837a-d9d4e6297196"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.986989 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv2lf\" (UniqueName: \"kubernetes.io/projected/c10816e0-2eac-4bbf-837a-d9d4e6297196-kube-api-access-tv2lf\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.987033 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:39 crc kubenswrapper[4801]: I1124 22:15:39.987047 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10816e0-2eac-4bbf-837a-d9d4e6297196-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.217025 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.217484 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-snfpm" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="registry-server" containerID="cri-o://c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa" gracePeriod=2 Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.225073 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerID="f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94" exitCode=0 Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.225132 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerDied","Data":"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94"} Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.225151 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5r8j" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.225180 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5r8j" event={"ID":"c10816e0-2eac-4bbf-837a-d9d4e6297196","Type":"ContainerDied","Data":"26549cda404e12fb39f578c357dcdfcc17f77ea4df6600a5ba343861f967bbb7"} Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.225209 4801 scope.go:117] "RemoveContainer" containerID="f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.261062 4801 scope.go:117] "RemoveContainer" containerID="579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.280728 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.295945 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5r8j"] Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.346103 4801 scope.go:117] "RemoveContainer" containerID="a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.494037 4801 scope.go:117] "RemoveContainer" 
containerID="f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94" Nov 24 22:15:40 crc kubenswrapper[4801]: E1124 22:15:40.496663 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94\": container with ID starting with f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94 not found: ID does not exist" containerID="f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.496709 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94"} err="failed to get container status \"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94\": rpc error: code = NotFound desc = could not find container \"f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94\": container with ID starting with f9df41e1dd69a302008893401043c745f8b47529f14d1c227c511e2d5a76ce94 not found: ID does not exist" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.496741 4801 scope.go:117] "RemoveContainer" containerID="579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6" Nov 24 22:15:40 crc kubenswrapper[4801]: E1124 22:15:40.498626 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6\": container with ID starting with 579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6 not found: ID does not exist" containerID="579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.498663 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6"} err="failed to get container status \"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6\": rpc error: code = NotFound desc = could not find container \"579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6\": container with ID starting with 579ed5e7c2b8061b8368fbf7849cb2332eccc9150cf0dba6f2805a3bcc533fd6 not found: ID does not exist" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.498684 4801 scope.go:117] "RemoveContainer" containerID="a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b" Nov 24 22:15:40 crc kubenswrapper[4801]: E1124 22:15:40.500074 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b\": container with ID starting with a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b not found: ID does not exist" containerID="a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.500146 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b"} err="failed to get container status \"a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b\": rpc error: code = NotFound desc = could not find container \"a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b\": container with ID starting with a402c46b6ad85e892b662fb630705470999b288e8c160ec633c26c617752592b not found: ID does not exist" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.679096 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" path="/var/lib/kubelet/pods/c10816e0-2eac-4bbf-837a-d9d4e6297196/volumes" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 
22:15:40.764904 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.809128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content\") pod \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.809567 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfsbm\" (UniqueName: \"kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm\") pod \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.809804 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities\") pod \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\" (UID: \"f99ffc9c-6a21-4525-92fe-d975c91d3afa\") " Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.812115 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities" (OuterVolumeSpecName: "utilities") pod "f99ffc9c-6a21-4525-92fe-d975c91d3afa" (UID: "f99ffc9c-6a21-4525-92fe-d975c91d3afa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.820595 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm" (OuterVolumeSpecName: "kube-api-access-bfsbm") pod "f99ffc9c-6a21-4525-92fe-d975c91d3afa" (UID: "f99ffc9c-6a21-4525-92fe-d975c91d3afa"). InnerVolumeSpecName "kube-api-access-bfsbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.870769 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f99ffc9c-6a21-4525-92fe-d975c91d3afa" (UID: "f99ffc9c-6a21-4525-92fe-d975c91d3afa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.913119 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.913160 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f99ffc9c-6a21-4525-92fe-d975c91d3afa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:40 crc kubenswrapper[4801]: I1124 22:15:40.913173 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfsbm\" (UniqueName: \"kubernetes.io/projected/f99ffc9c-6a21-4525-92fe-d975c91d3afa-kube-api-access-bfsbm\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.241393 4801 generic.go:334] "Generic (PLEG): container finished" podID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" 
containerID="c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa" exitCode=0 Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.241441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerDied","Data":"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa"} Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.241472 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snfpm" event={"ID":"f99ffc9c-6a21-4525-92fe-d975c91d3afa","Type":"ContainerDied","Data":"a0e5fe1d9785782857c8dbd9b7b47287ee7f88b152ecabbd3d09a5bfb1995a20"} Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.241495 4801 scope.go:117] "RemoveContainer" containerID="c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.241662 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snfpm" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.282825 4801 scope.go:117] "RemoveContainer" containerID="93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.283642 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.293726 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-snfpm"] Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.304313 4801 scope.go:117] "RemoveContainer" containerID="f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.334158 4801 scope.go:117] "RemoveContainer" containerID="c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa" Nov 24 22:15:41 crc kubenswrapper[4801]: E1124 22:15:41.334802 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa\": container with ID starting with c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa not found: ID does not exist" containerID="c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.334861 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa"} err="failed to get container status \"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa\": rpc error: code = NotFound desc = could not find container \"c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa\": container with ID starting with c772250e4e998ed8d4f859f4043bfbd33bfb7e78ccc0551c9bb34a47904ca9aa not 
found: ID does not exist" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.334896 4801 scope.go:117] "RemoveContainer" containerID="93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb" Nov 24 22:15:41 crc kubenswrapper[4801]: E1124 22:15:41.335274 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb\": container with ID starting with 93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb not found: ID does not exist" containerID="93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.335316 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb"} err="failed to get container status \"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb\": rpc error: code = NotFound desc = could not find container \"93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb\": container with ID starting with 93e83a299e3ca481471a87fdfc74b51e298bf65612ba8922fa4f9ecae64d04cb not found: ID does not exist" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.335348 4801 scope.go:117] "RemoveContainer" containerID="f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76" Nov 24 22:15:41 crc kubenswrapper[4801]: E1124 22:15:41.335616 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76\": container with ID starting with f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76 not found: ID does not exist" containerID="f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76" Nov 24 22:15:41 crc kubenswrapper[4801]: I1124 22:15:41.335646 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76"} err="failed to get container status \"f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76\": rpc error: code = NotFound desc = could not find container \"f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76\": container with ID starting with f0864937adab49e5250e5d208a58b301147190368017418176895b02ef01ec76 not found: ID does not exist" Nov 24 22:15:42 crc kubenswrapper[4801]: I1124 22:15:42.692072 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" path="/var/lib/kubelet/pods/f99ffc9c-6a21-4525-92fe-d975c91d3afa/volumes" Nov 24 22:15:47 crc kubenswrapper[4801]: I1124 22:15:47.315690 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhwm2" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" probeResult="failure" output=< Nov 24 22:15:47 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:15:47 crc kubenswrapper[4801]: > Nov 24 22:15:49 crc kubenswrapper[4801]: I1124 22:15:49.691783 4801 scope.go:117] "RemoveContainer" containerID="5d7c9bb7c45997a3d3c6a1b0c418b0533992edbe2ccf55e6029143a9766a39ee" Nov 24 22:15:56 crc kubenswrapper[4801]: I1124 22:15:56.307760 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:56 crc kubenswrapper[4801]: I1124 22:15:56.362466 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:56 crc kubenswrapper[4801]: I1124 22:15:56.553322 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:57 crc kubenswrapper[4801]: I1124 22:15:57.471220 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhwm2" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" containerID="cri-o://6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac" gracePeriod=2 Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.023788 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.194099 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content\") pod \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.194421 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities\") pod \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.194640 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5rjl\" (UniqueName: \"kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl\") pod \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\" (UID: \"7a3e81d9-ecfa-4075-bba5-5faac60a447d\") " Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.195127 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities" (OuterVolumeSpecName: "utilities") pod "7a3e81d9-ecfa-4075-bba5-5faac60a447d" (UID: "7a3e81d9-ecfa-4075-bba5-5faac60a447d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.195472 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.204815 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl" (OuterVolumeSpecName: "kube-api-access-t5rjl") pod "7a3e81d9-ecfa-4075-bba5-5faac60a447d" (UID: "7a3e81d9-ecfa-4075-bba5-5faac60a447d"). InnerVolumeSpecName "kube-api-access-t5rjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.290078 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a3e81d9-ecfa-4075-bba5-5faac60a447d" (UID: "7a3e81d9-ecfa-4075-bba5-5faac60a447d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.297415 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5rjl\" (UniqueName: \"kubernetes.io/projected/7a3e81d9-ecfa-4075-bba5-5faac60a447d-kube-api-access-t5rjl\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.297449 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3e81d9-ecfa-4075-bba5-5faac60a447d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.486704 4801 generic.go:334] "Generic (PLEG): container finished" podID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerID="6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac" exitCode=0 Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.486748 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerDied","Data":"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac"} Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.486780 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhwm2" event={"ID":"7a3e81d9-ecfa-4075-bba5-5faac60a447d","Type":"ContainerDied","Data":"c5c332315d9d29920a4cce884380280a153df0d94085fd58e8405668c2d2192c"} Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.486799 4801 scope.go:117] "RemoveContainer" containerID="6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.487007 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhwm2" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.528131 4801 scope.go:117] "RemoveContainer" containerID="2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.536150 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.550979 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhwm2"] Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.556072 4801 scope.go:117] "RemoveContainer" containerID="1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.610483 4801 scope.go:117] "RemoveContainer" containerID="6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac" Nov 24 22:15:58 crc kubenswrapper[4801]: E1124 22:15:58.611872 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac\": container with ID starting with 6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac not found: ID does not exist" containerID="6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.611914 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac"} err="failed to get container status \"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac\": rpc error: code = NotFound desc = could not find container \"6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac\": container with ID starting with 6413318363c99dd6c913c88fccf7b121ad2ee0bbc6732eefc5070aacdaef68ac not found: ID does 
not exist" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.611945 4801 scope.go:117] "RemoveContainer" containerID="2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd" Nov 24 22:15:58 crc kubenswrapper[4801]: E1124 22:15:58.612286 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd\": container with ID starting with 2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd not found: ID does not exist" containerID="2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.612338 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd"} err="failed to get container status \"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd\": rpc error: code = NotFound desc = could not find container \"2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd\": container with ID starting with 2801704bd194634c368aa2bdc9aef252e846de313c4dd37789ff9981b2e1a7bd not found: ID does not exist" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.612358 4801 scope.go:117] "RemoveContainer" containerID="1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5" Nov 24 22:15:58 crc kubenswrapper[4801]: E1124 22:15:58.612671 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5\": container with ID starting with 1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5 not found: ID does not exist" containerID="1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.612769 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5"} err="failed to get container status \"1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5\": rpc error: code = NotFound desc = could not find container \"1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5\": container with ID starting with 1e5699a7c5b06a9ae94b0a6e30e14efdf152f3795438363457fab9de1aa13cc5 not found: ID does not exist" Nov 24 22:15:58 crc kubenswrapper[4801]: I1124 22:15:58.676818 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" path="/var/lib/kubelet/pods/7a3e81d9-ecfa-4075-bba5-5faac60a447d/volumes" Nov 24 22:16:54 crc kubenswrapper[4801]: I1124 22:16:54.319491 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:16:54 crc kubenswrapper[4801]: I1124 22:16:54.320069 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:17:24 crc kubenswrapper[4801]: I1124 22:17:24.319796 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:17:24 crc kubenswrapper[4801]: I1124 22:17:24.320445 4801 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:17:54 crc kubenswrapper[4801]: I1124 22:17:54.320164 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:17:54 crc kubenswrapper[4801]: I1124 22:17:54.320837 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:17:54 crc kubenswrapper[4801]: I1124 22:17:54.320926 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:17:54 crc kubenswrapper[4801]: I1124 22:17:54.322157 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:17:54 crc kubenswrapper[4801]: I1124 22:17:54.322224 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" 
containerID="cri-o://9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" gracePeriod=600 Nov 24 22:17:54 crc kubenswrapper[4801]: E1124 22:17:54.450763 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:17:55 crc kubenswrapper[4801]: I1124 22:17:55.036094 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" exitCode=0 Nov 24 22:17:55 crc kubenswrapper[4801]: I1124 22:17:55.036963 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a"} Nov 24 22:17:55 crc kubenswrapper[4801]: I1124 22:17:55.037049 4801 scope.go:117] "RemoveContainer" containerID="d61503fbc4f553d637ed474ccd31d892910fc3f1ae184b6fe1c81835a23a5623" Nov 24 22:17:55 crc kubenswrapper[4801]: I1124 22:17:55.038266 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:17:55 crc kubenswrapper[4801]: E1124 22:17:55.038673 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:18:09 crc kubenswrapper[4801]: I1124 22:18:09.665239 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:18:09 crc kubenswrapper[4801]: E1124 22:18:09.666349 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:18:23 crc kubenswrapper[4801]: I1124 22:18:23.666306 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:18:23 crc kubenswrapper[4801]: E1124 22:18:23.667975 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:18:36 crc kubenswrapper[4801]: I1124 22:18:36.665836 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:18:36 crc kubenswrapper[4801]: E1124 22:18:36.667550 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:18:48 crc kubenswrapper[4801]: I1124 22:18:48.682918 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:18:48 crc kubenswrapper[4801]: E1124 22:18:48.684286 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:19:02 crc kubenswrapper[4801]: I1124 22:19:02.663828 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:19:02 crc kubenswrapper[4801]: E1124 22:19:02.664787 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:19:17 crc kubenswrapper[4801]: I1124 22:19:17.665509 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:19:17 crc kubenswrapper[4801]: E1124 22:19:17.666539 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:19:32 crc kubenswrapper[4801]: I1124 22:19:32.664932 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:19:32 crc kubenswrapper[4801]: E1124 22:19:32.666273 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:19:44 crc kubenswrapper[4801]: I1124 22:19:44.665653 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:19:44 crc kubenswrapper[4801]: E1124 22:19:44.666740 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:19:57 crc kubenswrapper[4801]: I1124 22:19:57.665158 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:19:57 crc kubenswrapper[4801]: E1124 22:19:57.666128 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:20:10 crc kubenswrapper[4801]: I1124 22:20:10.664646 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:20:10 crc kubenswrapper[4801]: E1124 22:20:10.665853 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:20:24 crc kubenswrapper[4801]: I1124 22:20:24.670537 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:20:24 crc kubenswrapper[4801]: E1124 22:20:24.671244 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:20:38 crc kubenswrapper[4801]: I1124 22:20:38.680179 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:20:38 crc kubenswrapper[4801]: E1124 22:20:38.681105 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.616441 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.617856 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.617882 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.617911 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.617925 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.617962 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.617974 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618007 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618020 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618065 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618077 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618110 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618124 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618149 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618161 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618195 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618206 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="extract-content" Nov 24 22:20:44 crc kubenswrapper[4801]: E1124 22:20:44.618233 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618244 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="extract-utilities" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618671 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3e81d9-ecfa-4075-bba5-5faac60a447d" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618707 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99ffc9c-6a21-4525-92fe-d975c91d3afa" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.618772 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10816e0-2eac-4bbf-837a-d9d4e6297196" containerName="registry-server" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.624166 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.652728 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.660081 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56gmh\" (UniqueName: \"kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.660432 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.660862 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.763133 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.763505 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56gmh\" (UniqueName: \"kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.763650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.763758 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.764247 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.785338 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56gmh\" (UniqueName: \"kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh\") pod \"community-operators-t67sj\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:44 crc kubenswrapper[4801]: I1124 22:20:44.965032 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:45 crc kubenswrapper[4801]: I1124 22:20:45.545029 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:20:46 crc kubenswrapper[4801]: I1124 22:20:46.559325 4801 generic.go:334] "Generic (PLEG): container finished" podID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerID="45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190" exitCode=0 Nov 24 22:20:46 crc kubenswrapper[4801]: I1124 22:20:46.559573 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerDied","Data":"45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190"} Nov 24 22:20:46 crc kubenswrapper[4801]: I1124 22:20:46.559985 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerStarted","Data":"9c3b7662d5cd405a96097b57231edca0055e077c8e82d81ef4a80e78b9aeb019"} Nov 24 22:20:46 crc kubenswrapper[4801]: I1124 
22:20:46.563615 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:20:47 crc kubenswrapper[4801]: I1124 22:20:47.575997 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerStarted","Data":"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1"} Nov 24 22:20:49 crc kubenswrapper[4801]: I1124 22:20:49.604210 4801 generic.go:334] "Generic (PLEG): container finished" podID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerID="811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1" exitCode=0 Nov 24 22:20:49 crc kubenswrapper[4801]: I1124 22:20:49.604290 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerDied","Data":"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1"} Nov 24 22:20:50 crc kubenswrapper[4801]: I1124 22:20:50.620577 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerStarted","Data":"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674"} Nov 24 22:20:50 crc kubenswrapper[4801]: I1124 22:20:50.656379 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t67sj" podStartSLOduration=3.152946806 podStartE2EDuration="6.656340685s" podCreationTimestamp="2025-11-24 22:20:44 +0000 UTC" firstStartedPulling="2025-11-24 22:20:46.563270619 +0000 UTC m=+4418.645857299" lastFinishedPulling="2025-11-24 22:20:50.066664498 +0000 UTC m=+4422.149251178" observedRunningTime="2025-11-24 22:20:50.651268949 +0000 UTC m=+4422.733855659" watchObservedRunningTime="2025-11-24 22:20:50.656340685 +0000 UTC m=+4422.738927355" Nov 24 22:20:52 crc 
kubenswrapper[4801]: I1124 22:20:52.667065 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:20:52 crc kubenswrapper[4801]: E1124 22:20:52.668517 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:20:54 crc kubenswrapper[4801]: I1124 22:20:54.965825 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:54 crc kubenswrapper[4801]: I1124 22:20:54.966452 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:20:56 crc kubenswrapper[4801]: I1124 22:20:56.060628 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-t67sj" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="registry-server" probeResult="failure" output=< Nov 24 22:20:56 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:20:56 crc kubenswrapper[4801]: > Nov 24 22:21:05 crc kubenswrapper[4801]: I1124 22:21:05.035070 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:21:05 crc kubenswrapper[4801]: I1124 22:21:05.889462 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:21:05 crc kubenswrapper[4801]: I1124 22:21:05.943411 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:21:06 crc kubenswrapper[4801]: I1124 22:21:06.664894 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:21:06 crc kubenswrapper[4801]: E1124 22:21:06.665328 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:21:06 crc kubenswrapper[4801]: I1124 22:21:06.898222 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t67sj" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="registry-server" containerID="cri-o://2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674" gracePeriod=2 Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.497331 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.626292 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content\") pod \"afdb2946-34cd-4555-97dd-c17e1d1241cd\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.626534 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities\") pod \"afdb2946-34cd-4555-97dd-c17e1d1241cd\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.626630 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56gmh\" (UniqueName: \"kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh\") pod \"afdb2946-34cd-4555-97dd-c17e1d1241cd\" (UID: \"afdb2946-34cd-4555-97dd-c17e1d1241cd\") " Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.627470 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities" (OuterVolumeSpecName: "utilities") pod "afdb2946-34cd-4555-97dd-c17e1d1241cd" (UID: "afdb2946-34cd-4555-97dd-c17e1d1241cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.636139 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh" (OuterVolumeSpecName: "kube-api-access-56gmh") pod "afdb2946-34cd-4555-97dd-c17e1d1241cd" (UID: "afdb2946-34cd-4555-97dd-c17e1d1241cd"). InnerVolumeSpecName "kube-api-access-56gmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.680426 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afdb2946-34cd-4555-97dd-c17e1d1241cd" (UID: "afdb2946-34cd-4555-97dd-c17e1d1241cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.730276 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56gmh\" (UniqueName: \"kubernetes.io/projected/afdb2946-34cd-4555-97dd-c17e1d1241cd-kube-api-access-56gmh\") on node \"crc\" DevicePath \"\"" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.730315 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.730324 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afdb2946-34cd-4555-97dd-c17e1d1241cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.918128 4801 generic.go:334] "Generic (PLEG): container finished" podID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerID="2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674" exitCode=0 Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.918178 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerDied","Data":"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674"} Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.918206 4801 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-t67sj" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.918227 4801 scope.go:117] "RemoveContainer" containerID="2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.918213 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t67sj" event={"ID":"afdb2946-34cd-4555-97dd-c17e1d1241cd","Type":"ContainerDied","Data":"9c3b7662d5cd405a96097b57231edca0055e077c8e82d81ef4a80e78b9aeb019"} Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.958579 4801 scope.go:117] "RemoveContainer" containerID="811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1" Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.970087 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.981553 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t67sj"] Nov 24 22:21:07 crc kubenswrapper[4801]: I1124 22:21:07.983376 4801 scope.go:117] "RemoveContainer" containerID="45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.049573 4801 scope.go:117] "RemoveContainer" containerID="2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674" Nov 24 22:21:08 crc kubenswrapper[4801]: E1124 22:21:08.050455 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674\": container with ID starting with 2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674 not found: ID does not exist" containerID="2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.050487 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674"} err="failed to get container status \"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674\": rpc error: code = NotFound desc = could not find container \"2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674\": container with ID starting with 2c05b46fad3813419fcac93def7e6dba4281984b112c9fa333fff37173bf3674 not found: ID does not exist" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.050509 4801 scope.go:117] "RemoveContainer" containerID="811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1" Nov 24 22:21:08 crc kubenswrapper[4801]: E1124 22:21:08.051193 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1\": container with ID starting with 811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1 not found: ID does not exist" containerID="811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.051213 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1"} err="failed to get container status \"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1\": rpc error: code = NotFound desc = could not find container \"811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1\": container with ID starting with 811f90d95361a7c9a7e811dbb8a0aef3173593a0357f0714dbb46346429788a1 not found: ID does not exist" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.051225 4801 scope.go:117] "RemoveContainer" containerID="45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190" Nov 24 22:21:08 crc kubenswrapper[4801]: E1124 
22:21:08.051647 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190\": container with ID starting with 45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190 not found: ID does not exist" containerID="45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.051666 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190"} err="failed to get container status \"45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190\": rpc error: code = NotFound desc = could not find container \"45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190\": container with ID starting with 45065eb2e195f2be90a63cf6d7c5f55254d470434006ff3369882839e768b190 not found: ID does not exist" Nov 24 22:21:08 crc kubenswrapper[4801]: I1124 22:21:08.690228 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" path="/var/lib/kubelet/pods/afdb2946-34cd-4555-97dd-c17e1d1241cd/volumes" Nov 24 22:21:17 crc kubenswrapper[4801]: I1124 22:21:17.664268 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:21:17 crc kubenswrapper[4801]: E1124 22:21:17.665323 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:21:29 crc kubenswrapper[4801]: I1124 22:21:29.665574 
4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:21:29 crc kubenswrapper[4801]: E1124 22:21:29.667116 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:21:42 crc kubenswrapper[4801]: I1124 22:21:42.664870 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:21:42 crc kubenswrapper[4801]: E1124 22:21:42.665879 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:21:57 crc kubenswrapper[4801]: I1124 22:21:57.077482 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:21:57 crc kubenswrapper[4801]: E1124 22:21:57.079577 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:22:10 crc kubenswrapper[4801]: I1124 
22:22:10.664979 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:22:10 crc kubenswrapper[4801]: E1124 22:22:10.666052 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:22:22 crc kubenswrapper[4801]: I1124 22:22:22.665709 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:22:22 crc kubenswrapper[4801]: E1124 22:22:22.666912 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:22:36 crc kubenswrapper[4801]: I1124 22:22:36.665214 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:22:36 crc kubenswrapper[4801]: E1124 22:22:36.666297 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:22:50 crc 
kubenswrapper[4801]: I1124 22:22:50.665061 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:22:50 crc kubenswrapper[4801]: E1124 22:22:50.666253 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:23:05 crc kubenswrapper[4801]: I1124 22:23:05.664546 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:23:06 crc kubenswrapper[4801]: I1124 22:23:06.218705 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c"} Nov 24 22:23:16 crc kubenswrapper[4801]: E1124 22:23:16.794925 4801 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.83:33972->38.102.83.83:34545: write tcp 38.102.83.83:33972->38.102.83.83:34545: write: broken pipe Nov 24 22:25:24 crc kubenswrapper[4801]: I1124 22:25:24.319917 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:25:24 crc kubenswrapper[4801]: I1124 22:25:24.320486 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.267511 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:26 crc kubenswrapper[4801]: E1124 22:25:26.269266 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="extract-content" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.269287 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="extract-content" Nov 24 22:25:26 crc kubenswrapper[4801]: E1124 22:25:26.269318 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="registry-server" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.269327 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="registry-server" Nov 24 22:25:26 crc kubenswrapper[4801]: E1124 22:25:26.269358 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="extract-utilities" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.269368 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="extract-utilities" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.269717 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="afdb2946-34cd-4555-97dd-c17e1d1241cd" containerName="registry-server" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.272051 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.293570 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.441599 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnb4\" (UniqueName: \"kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.441712 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.441856 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.545328 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnb4\" (UniqueName: \"kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.545467 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.545622 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.546227 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.546367 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.570548 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnb4\" (UniqueName: \"kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4\") pod \"redhat-marketplace-584pb\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:26 crc kubenswrapper[4801]: I1124 22:25:26.608961 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:27 crc kubenswrapper[4801]: I1124 22:25:27.223562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:28 crc kubenswrapper[4801]: I1124 22:25:28.212008 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerStarted","Data":"d26d6800fc57f0e2a92f7ce292aafc7029e2063bfbff1ab8421506cbd2869e65"} Nov 24 22:25:29 crc kubenswrapper[4801]: I1124 22:25:29.225288 4801 generic.go:334] "Generic (PLEG): container finished" podID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerID="8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b" exitCode=0 Nov 24 22:25:29 crc kubenswrapper[4801]: I1124 22:25:29.225352 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerDied","Data":"8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b"} Nov 24 22:25:31 crc kubenswrapper[4801]: I1124 22:25:31.259098 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerStarted","Data":"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3"} Nov 24 22:25:32 crc kubenswrapper[4801]: I1124 22:25:32.271449 4801 generic.go:334] "Generic (PLEG): container finished" podID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerID="058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3" exitCode=0 Nov 24 22:25:32 crc kubenswrapper[4801]: I1124 22:25:32.271546 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" 
event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerDied","Data":"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3"} Nov 24 22:25:33 crc kubenswrapper[4801]: I1124 22:25:33.284514 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerStarted","Data":"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5"} Nov 24 22:25:33 crc kubenswrapper[4801]: I1124 22:25:33.316916 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-584pb" podStartSLOduration=3.790889047 podStartE2EDuration="7.316893606s" podCreationTimestamp="2025-11-24 22:25:26 +0000 UTC" firstStartedPulling="2025-11-24 22:25:29.228082624 +0000 UTC m=+4701.310669294" lastFinishedPulling="2025-11-24 22:25:32.754087143 +0000 UTC m=+4704.836673853" observedRunningTime="2025-11-24 22:25:33.310332702 +0000 UTC m=+4705.392919372" watchObservedRunningTime="2025-11-24 22:25:33.316893606 +0000 UTC m=+4705.399480286" Nov 24 22:25:36 crc kubenswrapper[4801]: I1124 22:25:36.609484 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:36 crc kubenswrapper[4801]: I1124 22:25:36.610149 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:36 crc kubenswrapper[4801]: I1124 22:25:36.686931 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:46 crc kubenswrapper[4801]: I1124 22:25:46.686606 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:46 crc kubenswrapper[4801]: I1124 22:25:46.767171 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:47 crc kubenswrapper[4801]: I1124 22:25:47.453431 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-584pb" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="registry-server" containerID="cri-o://d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5" gracePeriod=2 Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.247005 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.392956 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities\") pod \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.393123 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btnb4\" (UniqueName: \"kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4\") pod \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.393663 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content\") pod \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\" (UID: \"a6154f61-ec6a-48fd-a68f-a26248f35fe6\") " Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.393857 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities" (OuterVolumeSpecName: "utilities") pod "a6154f61-ec6a-48fd-a68f-a26248f35fe6" (UID: 
"a6154f61-ec6a-48fd-a68f-a26248f35fe6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.394630 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.402633 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4" (OuterVolumeSpecName: "kube-api-access-btnb4") pod "a6154f61-ec6a-48fd-a68f-a26248f35fe6" (UID: "a6154f61-ec6a-48fd-a68f-a26248f35fe6"). InnerVolumeSpecName "kube-api-access-btnb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.413590 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6154f61-ec6a-48fd-a68f-a26248f35fe6" (UID: "a6154f61-ec6a-48fd-a68f-a26248f35fe6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.480909 4801 generic.go:334] "Generic (PLEG): container finished" podID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerID="d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5" exitCode=0 Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.480958 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerDied","Data":"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5"} Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.480990 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584pb" event={"ID":"a6154f61-ec6a-48fd-a68f-a26248f35fe6","Type":"ContainerDied","Data":"d26d6800fc57f0e2a92f7ce292aafc7029e2063bfbff1ab8421506cbd2869e65"} Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.481010 4801 scope.go:117] "RemoveContainer" containerID="d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.481200 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584pb" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.497265 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6154f61-ec6a-48fd-a68f-a26248f35fe6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.497297 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btnb4\" (UniqueName: \"kubernetes.io/projected/a6154f61-ec6a-48fd-a68f-a26248f35fe6-kube-api-access-btnb4\") on node \"crc\" DevicePath \"\"" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.516552 4801 scope.go:117] "RemoveContainer" containerID="058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.531624 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.549441 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-584pb"] Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.549680 4801 scope.go:117] "RemoveContainer" containerID="8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.627498 4801 scope.go:117] "RemoveContainer" containerID="d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5" Nov 24 22:25:48 crc kubenswrapper[4801]: E1124 22:25:48.627993 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5\": container with ID starting with d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5 not found: ID does not exist" containerID="d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5" Nov 24 22:25:48 crc 
kubenswrapper[4801]: I1124 22:25:48.628066 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5"} err="failed to get container status \"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5\": rpc error: code = NotFound desc = could not find container \"d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5\": container with ID starting with d365a4bc5e9f00582dac3a1b52518c54fc3c971e3ca2f2b13b42b355b5da55f5 not found: ID does not exist" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.628099 4801 scope.go:117] "RemoveContainer" containerID="058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3" Nov 24 22:25:48 crc kubenswrapper[4801]: E1124 22:25:48.628612 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3\": container with ID starting with 058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3 not found: ID does not exist" containerID="058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.628674 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3"} err="failed to get container status \"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3\": rpc error: code = NotFound desc = could not find container \"058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3\": container with ID starting with 058ec0642c229a6bc05e0a597976ac53c60231fc2ec0f4529f53e716fce77ae3 not found: ID does not exist" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.628709 4801 scope.go:117] "RemoveContainer" containerID="8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b" Nov 24 
22:25:48 crc kubenswrapper[4801]: E1124 22:25:48.629073 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b\": container with ID starting with 8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b not found: ID does not exist" containerID="8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.629097 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b"} err="failed to get container status \"8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b\": rpc error: code = NotFound desc = could not find container \"8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b\": container with ID starting with 8b4cf7f56246991784d18a422daa4e2c28606e01edd200c257608fb001e1571b not found: ID does not exist" Nov 24 22:25:48 crc kubenswrapper[4801]: I1124 22:25:48.676202 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" path="/var/lib/kubelet/pods/a6154f61-ec6a-48fd-a68f-a26248f35fe6/volumes" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.344737 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:25:51 crc kubenswrapper[4801]: E1124 22:25:51.345903 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="registry-server" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.345922 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="registry-server" Nov 24 22:25:51 crc kubenswrapper[4801]: E1124 22:25:51.345955 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="extract-utilities" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.345963 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="extract-utilities" Nov 24 22:25:51 crc kubenswrapper[4801]: E1124 22:25:51.345988 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="extract-content" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.345998 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="extract-content" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.346356 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6154f61-ec6a-48fd-a68f-a26248f35fe6" containerName="registry-server" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.348586 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.375800 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.488813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.489238 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdstn\" (UniqueName: \"kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn\") pod \"redhat-operators-f76vc\" (UID: 
\"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.489443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.591705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdstn\" (UniqueName: \"kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.591783 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.591948 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.592276 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " 
pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.592545 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.611337 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdstn\" (UniqueName: \"kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn\") pod \"redhat-operators-f76vc\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:51 crc kubenswrapper[4801]: I1124 22:25:51.684890 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:25:52 crc kubenswrapper[4801]: I1124 22:25:52.200394 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:25:52 crc kubenswrapper[4801]: I1124 22:25:52.525959 4801 generic.go:334] "Generic (PLEG): container finished" podID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerID="32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82" exitCode=0 Nov 24 22:25:52 crc kubenswrapper[4801]: I1124 22:25:52.526076 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerDied","Data":"32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82"} Nov 24 22:25:52 crc kubenswrapper[4801]: I1124 22:25:52.526272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" 
event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerStarted","Data":"f751e9ba8f8f2e037d98f1815fc815c7dbb88db1a474e22a71606eba222885c3"} Nov 24 22:25:52 crc kubenswrapper[4801]: I1124 22:25:52.527973 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:25:54 crc kubenswrapper[4801]: I1124 22:25:54.320072 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:25:54 crc kubenswrapper[4801]: I1124 22:25:54.320858 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:25:54 crc kubenswrapper[4801]: I1124 22:25:54.555508 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerStarted","Data":"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80"} Nov 24 22:25:59 crc kubenswrapper[4801]: I1124 22:25:59.639424 4801 generic.go:334] "Generic (PLEG): container finished" podID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerID="58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80" exitCode=0 Nov 24 22:25:59 crc kubenswrapper[4801]: I1124 22:25:59.639507 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerDied","Data":"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80"} Nov 24 22:26:00 crc kubenswrapper[4801]: 
I1124 22:26:00.655736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerStarted","Data":"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e"} Nov 24 22:26:00 crc kubenswrapper[4801]: I1124 22:26:00.677739 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f76vc" podStartSLOduration=1.9568942059999999 podStartE2EDuration="9.677714063s" podCreationTimestamp="2025-11-24 22:25:51 +0000 UTC" firstStartedPulling="2025-11-24 22:25:52.527737447 +0000 UTC m=+4724.610324117" lastFinishedPulling="2025-11-24 22:26:00.248557264 +0000 UTC m=+4732.331143974" observedRunningTime="2025-11-24 22:26:00.676031111 +0000 UTC m=+4732.758617801" watchObservedRunningTime="2025-11-24 22:26:00.677714063 +0000 UTC m=+4732.760300743" Nov 24 22:26:01 crc kubenswrapper[4801]: I1124 22:26:01.685927 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:01 crc kubenswrapper[4801]: I1124 22:26:01.686235 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:02 crc kubenswrapper[4801]: I1124 22:26:02.781242 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f76vc" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" probeResult="failure" output=< Nov 24 22:26:02 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:26:02 crc kubenswrapper[4801]: > Nov 24 22:26:13 crc kubenswrapper[4801]: I1124 22:26:13.383943 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f76vc" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" probeResult="failure" output=< Nov 24 
22:26:13 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:26:13 crc kubenswrapper[4801]: > Nov 24 22:26:21 crc kubenswrapper[4801]: I1124 22:26:21.773042 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:21 crc kubenswrapper[4801]: I1124 22:26:21.846868 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:22 crc kubenswrapper[4801]: I1124 22:26:22.567561 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:26:22 crc kubenswrapper[4801]: I1124 22:26:22.990558 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f76vc" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" containerID="cri-o://a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e" gracePeriod=2 Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.686424 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.750553 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdstn\" (UniqueName: \"kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn\") pod \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.750772 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities\") pod \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.750900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") pod \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.753491 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities" (OuterVolumeSpecName: "utilities") pod "d3d5c555-2e2d-47b6-95c0-43b8c81285f9" (UID: "d3d5c555-2e2d-47b6-95c0-43b8c81285f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.763126 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn" (OuterVolumeSpecName: "kube-api-access-hdstn") pod "d3d5c555-2e2d-47b6-95c0-43b8c81285f9" (UID: "d3d5c555-2e2d-47b6-95c0-43b8c81285f9"). InnerVolumeSpecName "kube-api-access-hdstn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.851925 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d5c555-2e2d-47b6-95c0-43b8c81285f9" (UID: "d3d5c555-2e2d-47b6-95c0-43b8c81285f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.853320 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") pod \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\" (UID: \"d3d5c555-2e2d-47b6-95c0-43b8c81285f9\") " Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.854174 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.854199 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdstn\" (UniqueName: \"kubernetes.io/projected/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-kube-api-access-hdstn\") on node \"crc\" DevicePath \"\"" Nov 24 22:26:23 crc kubenswrapper[4801]: W1124 22:26:23.855584 4801 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d3d5c555-2e2d-47b6-95c0-43b8c81285f9/volumes/kubernetes.io~empty-dir/catalog-content Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.855660 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d5c555-2e2d-47b6-95c0-43b8c81285f9" (UID: "d3d5c555-2e2d-47b6-95c0-43b8c81285f9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:26:23 crc kubenswrapper[4801]: I1124 22:26:23.956532 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d5c555-2e2d-47b6-95c0-43b8c81285f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.007503 4801 generic.go:334] "Generic (PLEG): container finished" podID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerID="a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e" exitCode=0 Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.007549 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerDied","Data":"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e"} Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.007587 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f76vc" event={"ID":"d3d5c555-2e2d-47b6-95c0-43b8c81285f9","Type":"ContainerDied","Data":"f751e9ba8f8f2e037d98f1815fc815c7dbb88db1a474e22a71606eba222885c3"} Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.007607 4801 scope.go:117] "RemoveContainer" containerID="a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.007633 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f76vc" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.047423 4801 scope.go:117] "RemoveContainer" containerID="58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.058600 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.069350 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f76vc"] Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.089401 4801 scope.go:117] "RemoveContainer" containerID="32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.164228 4801 scope.go:117] "RemoveContainer" containerID="a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e" Nov 24 22:26:24 crc kubenswrapper[4801]: E1124 22:26:24.164814 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e\": container with ID starting with a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e not found: ID does not exist" containerID="a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.164900 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e"} err="failed to get container status \"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e\": rpc error: code = NotFound desc = could not find container \"a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e\": container with ID starting with a8b6666105a0d101a5101e0eb77344cfaa3147ad4eec0f2a6640b5c6eb34151e not found: ID does 
not exist" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.164946 4801 scope.go:117] "RemoveContainer" containerID="58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80" Nov 24 22:26:24 crc kubenswrapper[4801]: E1124 22:26:24.165777 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80\": container with ID starting with 58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80 not found: ID does not exist" containerID="58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.165824 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80"} err="failed to get container status \"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80\": rpc error: code = NotFound desc = could not find container \"58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80\": container with ID starting with 58e7c8728c7eb5a58158cecbdf0c4ab57242d7a0859d5164cec5b68e191f8f80 not found: ID does not exist" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.165854 4801 scope.go:117] "RemoveContainer" containerID="32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82" Nov 24 22:26:24 crc kubenswrapper[4801]: E1124 22:26:24.166684 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82\": container with ID starting with 32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82 not found: ID does not exist" containerID="32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.166722 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82"} err="failed to get container status \"32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82\": rpc error: code = NotFound desc = could not find container \"32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82\": container with ID starting with 32880ca0c804095268b2157ae798dc9ac497577a15eba10ede36cbcb1a1fae82 not found: ID does not exist" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.322972 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.323102 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.323215 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.324486 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.324610 4801 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c" gracePeriod=600 Nov 24 22:26:24 crc kubenswrapper[4801]: I1124 22:26:24.677214 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" path="/var/lib/kubelet/pods/d3d5c555-2e2d-47b6-95c0-43b8c81285f9/volumes" Nov 24 22:26:26 crc kubenswrapper[4801]: I1124 22:26:26.036421 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c" exitCode=0 Nov 24 22:26:26 crc kubenswrapper[4801]: I1124 22:26:26.036470 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c"} Nov 24 22:26:26 crc kubenswrapper[4801]: I1124 22:26:26.036857 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf"} Nov 24 22:26:26 crc kubenswrapper[4801]: I1124 22:26:26.036891 4801 scope.go:117] "RemoveContainer" containerID="9f4908126c08fcbfde66b6b076eef8cac0d84d72e61c7de541f167eba45d164a" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.535157 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x994p"] Nov 24 22:26:45 crc kubenswrapper[4801]: E1124 22:26:45.536433 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" 
containerName="extract-utilities" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.536468 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="extract-utilities" Nov 24 22:26:45 crc kubenswrapper[4801]: E1124 22:26:45.536530 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.536539 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" Nov 24 22:26:45 crc kubenswrapper[4801]: E1124 22:26:45.536574 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="extract-content" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.536582 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="extract-content" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.536945 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d5c555-2e2d-47b6-95c0-43b8c81285f9" containerName="registry-server" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.539484 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.562314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x994p"] Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.617668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-utilities\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.617848 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-catalog-content\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.617893 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvz6h\" (UniqueName: \"kubernetes.io/projected/c7e4732c-aa5f-4328-965a-2224085345a6-kube-api-access-rvz6h\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.720202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-utilities\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.720339 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-catalog-content\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.720438 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvz6h\" (UniqueName: \"kubernetes.io/projected/c7e4732c-aa5f-4328-965a-2224085345a6-kube-api-access-rvz6h\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.720970 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-utilities\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.721037 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e4732c-aa5f-4328-965a-2224085345a6-catalog-content\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.755460 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvz6h\" (UniqueName: \"kubernetes.io/projected/c7e4732c-aa5f-4328-965a-2224085345a6-kube-api-access-rvz6h\") pod \"certified-operators-x994p\" (UID: \"c7e4732c-aa5f-4328-965a-2224085345a6\") " pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:45 crc kubenswrapper[4801]: I1124 22:26:45.883785 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:26:46 crc kubenswrapper[4801]: I1124 22:26:46.438896 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x994p"] Nov 24 22:26:47 crc kubenswrapper[4801]: I1124 22:26:47.331798 4801 generic.go:334] "Generic (PLEG): container finished" podID="c7e4732c-aa5f-4328-965a-2224085345a6" containerID="8d69144f79395e798e0e1f438cb95a194896a1b874bfc5eb33329589c4121dbb" exitCode=0 Nov 24 22:26:47 crc kubenswrapper[4801]: I1124 22:26:47.331897 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x994p" event={"ID":"c7e4732c-aa5f-4328-965a-2224085345a6","Type":"ContainerDied","Data":"8d69144f79395e798e0e1f438cb95a194896a1b874bfc5eb33329589c4121dbb"} Nov 24 22:26:47 crc kubenswrapper[4801]: I1124 22:26:47.332168 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x994p" event={"ID":"c7e4732c-aa5f-4328-965a-2224085345a6","Type":"ContainerStarted","Data":"e3558bd2ad7486524e00959c1d8206bae6fc4e4cf85e893a53601d9ea054ee00"} Nov 24 22:26:54 crc kubenswrapper[4801]: I1124 22:26:54.413675 4801 generic.go:334] "Generic (PLEG): container finished" podID="c7e4732c-aa5f-4328-965a-2224085345a6" containerID="c39f46495f8e63c7a4ac717c6164b827d600e3cbfaffdff421cf31163caedf1f" exitCode=0 Nov 24 22:26:54 crc kubenswrapper[4801]: I1124 22:26:54.413738 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x994p" event={"ID":"c7e4732c-aa5f-4328-965a-2224085345a6","Type":"ContainerDied","Data":"c39f46495f8e63c7a4ac717c6164b827d600e3cbfaffdff421cf31163caedf1f"} Nov 24 22:26:56 crc kubenswrapper[4801]: I1124 22:26:56.698052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x994p" 
event={"ID":"c7e4732c-aa5f-4328-965a-2224085345a6","Type":"ContainerStarted","Data":"4caea4473245c0ed8f3534e4bbbb9f377f06111d1aa5555e0704739ad4df2ce5"} Nov 24 22:26:56 crc kubenswrapper[4801]: I1124 22:26:56.718315 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x994p" podStartSLOduration=4.218401461 podStartE2EDuration="11.718288991s" podCreationTimestamp="2025-11-24 22:26:45 +0000 UTC" firstStartedPulling="2025-11-24 22:26:47.335275283 +0000 UTC m=+4779.417861953" lastFinishedPulling="2025-11-24 22:26:54.835162773 +0000 UTC m=+4786.917749483" observedRunningTime="2025-11-24 22:26:56.707699092 +0000 UTC m=+4788.790285762" watchObservedRunningTime="2025-11-24 22:26:56.718288991 +0000 UTC m=+4788.800875671" Nov 24 22:27:05 crc kubenswrapper[4801]: I1124 22:27:05.885053 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:27:05 crc kubenswrapper[4801]: I1124 22:27:05.885726 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:27:05 crc kubenswrapper[4801]: I1124 22:27:05.968415 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.384086 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x994p" Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.494822 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x994p"] Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.556676 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"] Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.556981 4801 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4vwq" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="registry-server" containerID="cri-o://ec3f085764451c4644e36e00396aa7d6cdc054d2b7c454098530c2b219eb8198" gracePeriod=2 Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.882797 4801 generic.go:334] "Generic (PLEG): container finished" podID="4943a025-64de-4062-9c0d-51e219ce174b" containerID="ec3f085764451c4644e36e00396aa7d6cdc054d2b7c454098530c2b219eb8198" exitCode=0 Nov 24 22:27:07 crc kubenswrapper[4801]: I1124 22:27:07.883808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerDied","Data":"ec3f085764451c4644e36e00396aa7d6cdc054d2b7c454098530c2b219eb8198"} Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.110712 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.190285 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content\") pod \"4943a025-64de-4062-9c0d-51e219ce174b\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.190471 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbm74\" (UniqueName: \"kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74\") pod \"4943a025-64de-4062-9c0d-51e219ce174b\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.190579 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities\") pod \"4943a025-64de-4062-9c0d-51e219ce174b\" (UID: \"4943a025-64de-4062-9c0d-51e219ce174b\") " Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.191042 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities" (OuterVolumeSpecName: "utilities") pod "4943a025-64de-4062-9c0d-51e219ce174b" (UID: "4943a025-64de-4062-9c0d-51e219ce174b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.191499 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.197646 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74" (OuterVolumeSpecName: "kube-api-access-dbm74") pod "4943a025-64de-4062-9c0d-51e219ce174b" (UID: "4943a025-64de-4062-9c0d-51e219ce174b"). InnerVolumeSpecName "kube-api-access-dbm74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.250914 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4943a025-64de-4062-9c0d-51e219ce174b" (UID: "4943a025-64de-4062-9c0d-51e219ce174b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.295161 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbm74\" (UniqueName: \"kubernetes.io/projected/4943a025-64de-4062-9c0d-51e219ce174b-kube-api-access-dbm74\") on node \"crc\" DevicePath \"\"" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.295202 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4943a025-64de-4062-9c0d-51e219ce174b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.897091 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4vwq" event={"ID":"4943a025-64de-4062-9c0d-51e219ce174b","Type":"ContainerDied","Data":"67e1e2af0fd00f64d528b4928beac641b1b9f8d86a65f414e38eda840207dd03"} Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.897320 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4vwq" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.897560 4801 scope.go:117] "RemoveContainer" containerID="ec3f085764451c4644e36e00396aa7d6cdc054d2b7c454098530c2b219eb8198" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.952849 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"] Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.954581 4801 scope.go:117] "RemoveContainer" containerID="2303e49a4c54e8efcff3923dccda7aa39488208facf99e424835fa7d970150d9" Nov 24 22:27:08 crc kubenswrapper[4801]: I1124 22:27:08.968389 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4vwq"] Nov 24 22:27:09 crc kubenswrapper[4801]: I1124 22:27:09.545994 4801 scope.go:117] "RemoveContainer" containerID="c3d3c595f9c1dc416494e7587daad6f25eccb92d6aa241364b87202c278ecdd9" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.059190 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 22:27:10 crc kubenswrapper[4801]: E1124 22:27:10.060025 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="extract-utilities" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.060038 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="extract-utilities" Nov 24 22:27:10 crc kubenswrapper[4801]: E1124 22:27:10.060079 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="extract-content" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.060085 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="extract-content" Nov 24 22:27:10 crc kubenswrapper[4801]: E1124 22:27:10.060112 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="registry-server" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.060118 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="registry-server" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.060367 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4943a025-64de-4062-9c0d-51e219ce174b" containerName="registry-server" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.061195 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.063902 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.064087 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.064300 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bmhbq" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.064902 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.074086 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.143309 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 
22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.143497 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.143595 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246595 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twg4f\" (UniqueName: \"kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246648 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246695 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 
22:27:10.246723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246786 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246839 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246893 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.246928 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.247873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.249622 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.254041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.349512 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twg4f\" (UniqueName: \"kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.349673 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.349840 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.349952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.350020 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.350099 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.350830 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.350854 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.353901 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.354242 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.354666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.379336 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twg4f\" (UniqueName: \"kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f\") pod 
\"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.399503 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " pod="openstack/tempest-tests-tempest" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.685759 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4943a025-64de-4062-9c0d-51e219ce174b" path="/var/lib/kubelet/pods/4943a025-64de-4062-9c0d-51e219ce174b/volumes" Nov 24 22:27:10 crc kubenswrapper[4801]: I1124 22:27:10.686102 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 22:27:11 crc kubenswrapper[4801]: W1124 22:27:11.231851 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b7936e0_8a45_4c32_a7a9_8323443c4274.slice/crio-1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d WatchSource:0}: Error finding container 1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d: Status 404 returned error can't find the container with id 1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d Nov 24 22:27:11 crc kubenswrapper[4801]: I1124 22:27:11.234255 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 22:27:11 crc kubenswrapper[4801]: I1124 22:27:11.936356 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0b7936e0-8a45-4c32-a7a9-8323443c4274","Type":"ContainerStarted","Data":"1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d"} Nov 24 22:27:21 crc kubenswrapper[4801]: I1124 22:27:21.716481 4801 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-nmstate/nmstate-handler-db4b2" podUID="5d6ad10f-790a-49ed-aff0-13a1bd01c476" containerName="nmstate-handler" probeResult="failure" output="command timed out" Nov 24 22:27:45 crc kubenswrapper[4801]: E1124 22:27:45.682703 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 22:27:45 crc kubenswrapper[4801]: E1124 22:27:45.687743 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPro
pagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twg4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0b7936e0-8a45-4c32-a7a9-8323443c4274): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 22:27:45 crc kubenswrapper[4801]: E1124 22:27:45.689039 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0b7936e0-8a45-4c32-a7a9-8323443c4274" Nov 24 22:27:46 crc kubenswrapper[4801]: E1124 22:27:46.390837 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="0b7936e0-8a45-4c32-a7a9-8323443c4274" Nov 24 22:28:02 crc kubenswrapper[4801]: I1124 22:28:02.645052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0b7936e0-8a45-4c32-a7a9-8323443c4274","Type":"ContainerStarted","Data":"26cc21e8b8ff03dca6dda5f2dd2c72c44101370b727d73d3a6e4f149f4e014da"} Nov 24 22:28:02 crc kubenswrapper[4801]: I1124 22:28:02.687213 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.755489347 podStartE2EDuration="53.687186567s" podCreationTimestamp="2025-11-24 22:27:09 +0000 UTC" firstStartedPulling="2025-11-24 22:27:11.236073041 +0000 UTC m=+4803.318659751" lastFinishedPulling="2025-11-24 22:28:00.167770281 +0000 UTC m=+4852.250356971" observedRunningTime="2025-11-24 22:28:02.668108243 +0000 UTC m=+4854.750694913" watchObservedRunningTime="2025-11-24 22:28:02.687186567 +0000 UTC m=+4854.769773247" Nov 24 22:28:54 crc kubenswrapper[4801]: I1124 22:28:54.322464 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:28:54 crc kubenswrapper[4801]: I1124 22:28:54.322974 4801 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:29:24 crc kubenswrapper[4801]: I1124 22:29:24.320229 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:29:24 crc kubenswrapper[4801]: I1124 22:29:24.322411 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:29:54 crc kubenswrapper[4801]: I1124 22:29:54.319712 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:29:54 crc kubenswrapper[4801]: I1124 22:29:54.320221 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:29:54 crc kubenswrapper[4801]: I1124 22:29:54.321723 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:29:54 crc 
kubenswrapper[4801]: I1124 22:29:54.324346 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:29:54 crc kubenswrapper[4801]: I1124 22:29:54.324470 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" gracePeriod=600 Nov 24 22:29:54 crc kubenswrapper[4801]: E1124 22:29:54.467524 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:29:55 crc kubenswrapper[4801]: I1124 22:29:55.028199 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" exitCode=0 Nov 24 22:29:55 crc kubenswrapper[4801]: I1124 22:29:55.028287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf"} Nov 24 22:29:55 crc kubenswrapper[4801]: I1124 22:29:55.031167 4801 scope.go:117] "RemoveContainer" 
containerID="852d5609684e479b8b1fd7886c79051f52bf37c175dd8deecb3587aee829f93c" Nov 24 22:29:55 crc kubenswrapper[4801]: I1124 22:29:55.031894 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:29:55 crc kubenswrapper[4801]: E1124 22:29:55.033106 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.385494 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk"] Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.401262 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.418689 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.418707 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.434997 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk"] Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.470588 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ncp\" (UniqueName: \"kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.470692 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.471201 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.574541 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.574731 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ncp\" (UniqueName: \"kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.574807 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.582070 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.595102 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.595305 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ncp\" (UniqueName: \"kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp\") pod \"collect-profiles-29400390-9ppsk\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:00 crc kubenswrapper[4801]: I1124 22:30:00.738957 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:02 crc kubenswrapper[4801]: I1124 22:30:02.053311 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk"] Nov 24 22:30:02 crc kubenswrapper[4801]: I1124 22:30:02.108004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" event={"ID":"34e7c0f7-39de-4519-9b5b-d1c8c51264ee","Type":"ContainerStarted","Data":"d8c757153ac0b75b450cedbe399b65d90d09d8c7ae578a0ecb3959ceb9e45a92"} Nov 24 22:30:03 crc kubenswrapper[4801]: I1124 22:30:03.121081 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" event={"ID":"34e7c0f7-39de-4519-9b5b-d1c8c51264ee","Type":"ContainerStarted","Data":"8dbc20aed5dbce6bf46a434974069be27f5b4141e8c85f3e70402ca903a32973"} Nov 24 22:30:03 crc kubenswrapper[4801]: I1124 22:30:03.145977 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" 
podStartSLOduration=3.138909674 podStartE2EDuration="3.138909674s" podCreationTimestamp="2025-11-24 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:30:03.134069744 +0000 UTC m=+4975.216656414" watchObservedRunningTime="2025-11-24 22:30:03.138909674 +0000 UTC m=+4975.221496344" Nov 24 22:30:04 crc kubenswrapper[4801]: I1124 22:30:04.133113 4801 generic.go:334] "Generic (PLEG): container finished" podID="34e7c0f7-39de-4519-9b5b-d1c8c51264ee" containerID="8dbc20aed5dbce6bf46a434974069be27f5b4141e8c85f3e70402ca903a32973" exitCode=0 Nov 24 22:30:04 crc kubenswrapper[4801]: I1124 22:30:04.133196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" event={"ID":"34e7c0f7-39de-4519-9b5b-d1c8c51264ee","Type":"ContainerDied","Data":"8dbc20aed5dbce6bf46a434974069be27f5b4141e8c85f3e70402ca903a32973"} Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.852174 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.923560 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume\") pod \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.923638 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ncp\" (UniqueName: \"kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp\") pod \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.923671 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume\") pod \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\" (UID: \"34e7c0f7-39de-4519-9b5b-d1c8c51264ee\") " Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.930608 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "34e7c0f7-39de-4519-9b5b-d1c8c51264ee" (UID: "34e7c0f7-39de-4519-9b5b-d1c8c51264ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.941398 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp" (OuterVolumeSpecName: "kube-api-access-l5ncp") pod "34e7c0f7-39de-4519-9b5b-d1c8c51264ee" (UID: "34e7c0f7-39de-4519-9b5b-d1c8c51264ee"). 
InnerVolumeSpecName "kube-api-access-l5ncp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:30:05 crc kubenswrapper[4801]: I1124 22:30:05.953318 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34e7c0f7-39de-4519-9b5b-d1c8c51264ee" (UID: "34e7c0f7-39de-4519-9b5b-d1c8c51264ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.027455 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.027490 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ncp\" (UniqueName: \"kubernetes.io/projected/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-kube-api-access-l5ncp\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.027500 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e7c0f7-39de-4519-9b5b-d1c8c51264ee-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.157133 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" event={"ID":"34e7c0f7-39de-4519-9b5b-d1c8c51264ee","Type":"ContainerDied","Data":"d8c757153ac0b75b450cedbe399b65d90d09d8c7ae578a0ecb3959ceb9e45a92"} Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.157190 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c757153ac0b75b450cedbe399b65d90d09d8c7ae578a0ecb3959ceb9e45a92" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.157192 4801 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-9ppsk" Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.235610 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9"] Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.247149 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-fc9l9"] Nov 24 22:30:06 crc kubenswrapper[4801]: I1124 22:30:06.680478 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd0c6ae-984a-45e2-a334-bc22a349db84" path="/var/lib/kubelet/pods/dbd0c6ae-984a-45e2-a334-bc22a349db84/volumes" Nov 24 22:30:08 crc kubenswrapper[4801]: I1124 22:30:08.694799 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:30:08 crc kubenswrapper[4801]: E1124 22:30:08.695493 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:30:22 crc kubenswrapper[4801]: I1124 22:30:22.664535 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:30:22 crc kubenswrapper[4801]: E1124 22:30:22.665654 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:30:35 crc kubenswrapper[4801]: I1124 22:30:35.664421 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:30:35 crc kubenswrapper[4801]: E1124 22:30:35.665591 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:30:49 crc kubenswrapper[4801]: I1124 22:30:49.666001 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:30:49 crc kubenswrapper[4801]: E1124 22:30:49.667669 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:30:53 crc kubenswrapper[4801]: I1124 22:30:53.527097 4801 scope.go:117] "RemoveContainer" containerID="dc22f23ab780848ba0f10694a28bed06707e7863da9529967a9febcf1463b4ea" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.609098 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:30:59 crc kubenswrapper[4801]: E1124 22:30:59.611718 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e7c0f7-39de-4519-9b5b-d1c8c51264ee" 
containerName="collect-profiles" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.611761 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e7c0f7-39de-4519-9b5b-d1c8c51264ee" containerName="collect-profiles" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.612616 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e7c0f7-39de-4519-9b5b-d1c8c51264ee" containerName="collect-profiles" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.615409 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.628013 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.657438 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.657660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.657710 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pssh8\" (UniqueName: \"kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " 
pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.760262 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.760498 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.760766 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pssh8\" (UniqueName: \"kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.763468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.763517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " 
pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.781426 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pssh8\" (UniqueName: \"kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8\") pod \"community-operators-ds9jh\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:30:59 crc kubenswrapper[4801]: I1124 22:30:59.950266 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:00 crc kubenswrapper[4801]: I1124 22:31:00.469641 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:31:00 crc kubenswrapper[4801]: I1124 22:31:00.836414 4801 generic.go:334] "Generic (PLEG): container finished" podID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerID="ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5" exitCode=0 Nov 24 22:31:00 crc kubenswrapper[4801]: I1124 22:31:00.836491 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerDied","Data":"ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5"} Nov 24 22:31:00 crc kubenswrapper[4801]: I1124 22:31:00.836816 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerStarted","Data":"4a88a240eaefd5889db3168c70670ccc30f29210417da14d12956dd5ba718133"} Nov 24 22:31:00 crc kubenswrapper[4801]: I1124 22:31:00.846294 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:31:01 crc kubenswrapper[4801]: I1124 22:31:01.851657 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerStarted","Data":"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6"} Nov 24 22:31:02 crc kubenswrapper[4801]: I1124 22:31:02.862018 4801 generic.go:334] "Generic (PLEG): container finished" podID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerID="53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6" exitCode=0 Nov 24 22:31:02 crc kubenswrapper[4801]: I1124 22:31:02.862422 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerDied","Data":"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6"} Nov 24 22:31:03 crc kubenswrapper[4801]: I1124 22:31:03.664626 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:31:03 crc kubenswrapper[4801]: E1124 22:31:03.665803 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:31:03 crc kubenswrapper[4801]: I1124 22:31:03.891585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerStarted","Data":"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611"} Nov 24 22:31:03 crc kubenswrapper[4801]: I1124 22:31:03.925464 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ds9jh" 
podStartSLOduration=2.424087746 podStartE2EDuration="4.925436365s" podCreationTimestamp="2025-11-24 22:30:59 +0000 UTC" firstStartedPulling="2025-11-24 22:31:00.838452924 +0000 UTC m=+5032.921039594" lastFinishedPulling="2025-11-24 22:31:03.339801543 +0000 UTC m=+5035.422388213" observedRunningTime="2025-11-24 22:31:03.921712529 +0000 UTC m=+5036.004299199" watchObservedRunningTime="2025-11-24 22:31:03.925436365 +0000 UTC m=+5036.008023035" Nov 24 22:31:09 crc kubenswrapper[4801]: I1124 22:31:09.950619 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:09 crc kubenswrapper[4801]: I1124 22:31:09.951134 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:10 crc kubenswrapper[4801]: I1124 22:31:10.032295 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:10 crc kubenswrapper[4801]: I1124 22:31:10.109228 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:10 crc kubenswrapper[4801]: I1124 22:31:10.280709 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:31:12 crc kubenswrapper[4801]: I1124 22:31:12.007537 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ds9jh" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="registry-server" containerID="cri-o://181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611" gracePeriod=2 Nov 24 22:31:12 crc kubenswrapper[4801]: I1124 22:31:12.962798 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.021775 4801 generic.go:334] "Generic (PLEG): container finished" podID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerID="181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611" exitCode=0 Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.021822 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerDied","Data":"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611"} Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.021852 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ds9jh" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.021870 4801 scope.go:117] "RemoveContainer" containerID="181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.021858 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ds9jh" event={"ID":"05c1f6b3-4634-4918-bec5-2da64dec01df","Type":"ContainerDied","Data":"4a88a240eaefd5889db3168c70670ccc30f29210417da14d12956dd5ba718133"} Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.054394 4801 scope.go:117] "RemoveContainer" containerID="53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.086751 4801 scope.go:117] "RemoveContainer" containerID="ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.141846 4801 scope.go:117] "RemoveContainer" containerID="181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611" Nov 24 22:31:13 crc kubenswrapper[4801]: E1124 22:31:13.144626 4801 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611\": container with ID starting with 181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611 not found: ID does not exist" containerID="181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.144686 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611"} err="failed to get container status \"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611\": rpc error: code = NotFound desc = could not find container \"181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611\": container with ID starting with 181a8eecf965070acef3cdfbbe11a95ae44f5e7d53bb0552ac6088bc78829611 not found: ID does not exist" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.144718 4801 scope.go:117] "RemoveContainer" containerID="53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6" Nov 24 22:31:13 crc kubenswrapper[4801]: E1124 22:31:13.145184 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6\": container with ID starting with 53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6 not found: ID does not exist" containerID="53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.145230 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6"} err="failed to get container status \"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6\": rpc error: code = NotFound desc = could not find container 
\"53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6\": container with ID starting with 53805a9090fe5c4247eeedbee489e6da92025ea8d0b8a433373a6c9ded205ca6 not found: ID does not exist" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.145258 4801 scope.go:117] "RemoveContainer" containerID="ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5" Nov 24 22:31:13 crc kubenswrapper[4801]: E1124 22:31:13.145572 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5\": container with ID starting with ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5 not found: ID does not exist" containerID="ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.145610 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5"} err="failed to get container status \"ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5\": rpc error: code = NotFound desc = could not find container \"ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5\": container with ID starting with ff5b47709e352717ea2b7cad36e1b96ae6a9480e8da5f73439ca8b416c68aef5 not found: ID does not exist" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.147751 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities\") pod \"05c1f6b3-4634-4918-bec5-2da64dec01df\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.148049 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pssh8\" (UniqueName: 
\"kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8\") pod \"05c1f6b3-4634-4918-bec5-2da64dec01df\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.148323 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content\") pod \"05c1f6b3-4634-4918-bec5-2da64dec01df\" (UID: \"05c1f6b3-4634-4918-bec5-2da64dec01df\") " Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.148514 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities" (OuterVolumeSpecName: "utilities") pod "05c1f6b3-4634-4918-bec5-2da64dec01df" (UID: "05c1f6b3-4634-4918-bec5-2da64dec01df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.149105 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.158271 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8" (OuterVolumeSpecName: "kube-api-access-pssh8") pod "05c1f6b3-4634-4918-bec5-2da64dec01df" (UID: "05c1f6b3-4634-4918-bec5-2da64dec01df"). InnerVolumeSpecName "kube-api-access-pssh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.202023 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05c1f6b3-4634-4918-bec5-2da64dec01df" (UID: "05c1f6b3-4634-4918-bec5-2da64dec01df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.251421 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c1f6b3-4634-4918-bec5-2da64dec01df-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.251463 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pssh8\" (UniqueName: \"kubernetes.io/projected/05c1f6b3-4634-4918-bec5-2da64dec01df-kube-api-access-pssh8\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.362128 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:31:13 crc kubenswrapper[4801]: I1124 22:31:13.371782 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ds9jh"] Nov 24 22:31:14 crc kubenswrapper[4801]: I1124 22:31:14.676823 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" path="/var/lib/kubelet/pods/05c1f6b3-4634-4918-bec5-2da64dec01df/volumes" Nov 24 22:31:16 crc kubenswrapper[4801]: I1124 22:31:16.664425 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:31:16 crc kubenswrapper[4801]: E1124 22:31:16.665152 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:31:27 crc kubenswrapper[4801]: I1124 22:31:27.665027 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:31:27 crc kubenswrapper[4801]: E1124 22:31:27.665864 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:31:41 crc kubenswrapper[4801]: I1124 22:31:41.664414 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:31:41 crc kubenswrapper[4801]: E1124 22:31:41.665285 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:31:53 crc kubenswrapper[4801]: I1124 22:31:53.664899 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:31:53 crc kubenswrapper[4801]: E1124 22:31:53.665701 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:32:07 crc kubenswrapper[4801]: I1124 22:32:07.664293 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:32:07 crc kubenswrapper[4801]: E1124 22:32:07.665057 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:32:20 crc kubenswrapper[4801]: I1124 22:32:20.664466 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:32:20 crc kubenswrapper[4801]: E1124 22:32:20.665380 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:32:33 crc kubenswrapper[4801]: I1124 22:32:33.664767 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:32:33 crc kubenswrapper[4801]: E1124 22:32:33.665770 4801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:32:44 crc kubenswrapper[4801]: I1124 22:32:44.663957 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:32:44 crc kubenswrapper[4801]: E1124 22:32:44.665207 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:32:59 crc kubenswrapper[4801]: I1124 22:32:59.664885 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:32:59 crc kubenswrapper[4801]: E1124 22:32:59.665784 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:33:09 crc kubenswrapper[4801]: I1124 22:33:09.728741 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerName="galera" probeResult="failure" output="command 
timed out" Nov 24 22:33:09 crc kubenswrapper[4801]: I1124 22:33:09.730594 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:33:13 crc kubenswrapper[4801]: I1124 22:33:13.664683 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:33:13 crc kubenswrapper[4801]: E1124 22:33:13.667139 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:33:26 crc kubenswrapper[4801]: I1124 22:33:26.665308 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:33:26 crc kubenswrapper[4801]: E1124 22:33:26.666629 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:33:41 crc kubenswrapper[4801]: I1124 22:33:41.664769 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:33:41 crc kubenswrapper[4801]: E1124 22:33:41.665693 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:33:52 crc kubenswrapper[4801]: I1124 22:33:52.664659 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:33:52 crc kubenswrapper[4801]: E1124 22:33:52.665322 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:34:03 crc kubenswrapper[4801]: I1124 22:34:03.665020 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:34:03 crc kubenswrapper[4801]: E1124 22:34:03.666498 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:34:14 crc kubenswrapper[4801]: I1124 22:34:14.665384 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:34:14 crc kubenswrapper[4801]: E1124 22:34:14.666351 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:34:29 crc kubenswrapper[4801]: I1124 22:34:29.665075 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:34:29 crc kubenswrapper[4801]: E1124 22:34:29.666782 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:34:42 crc kubenswrapper[4801]: I1124 22:34:42.665196 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:34:42 crc kubenswrapper[4801]: E1124 22:34:42.666362 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:34:51 crc kubenswrapper[4801]: I1124 22:34:51.143758 4801 trace.go:236] Trace[169234584]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (24-Nov-2025 22:34:49.836) (total time: 1305ms): Nov 24 22:34:51 crc kubenswrapper[4801]: Trace[169234584]: 
[1.305049998s] [1.305049998s] END Nov 24 22:34:54 crc kubenswrapper[4801]: I1124 22:34:54.664292 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:34:55 crc kubenswrapper[4801]: I1124 22:34:55.052286 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154"} Nov 24 22:35:09 crc kubenswrapper[4801]: I1124 22:35:09.716103 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:36:07 crc kubenswrapper[4801]: I1124 22:36:07.715878 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c6e61e62-b039-4898-b4fa-f20160b67641" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:36:07 crc kubenswrapper[4801]: I1124 22:36:07.717846 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="c6e61e62-b039-4898-b4fa-f20160b67641" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:36:54 crc kubenswrapper[4801]: I1124 22:36:54.320453 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:36:54 crc kubenswrapper[4801]: I1124 22:36:54.321065 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:37:24 crc kubenswrapper[4801]: I1124 22:37:24.320312 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:37:24 crc kubenswrapper[4801]: I1124 22:37:24.321012 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.832971 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:37:38 crc kubenswrapper[4801]: E1124 22:37:38.834198 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="extract-content" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.834218 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="extract-content" Nov 24 22:37:38 crc kubenswrapper[4801]: E1124 22:37:38.834246 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="registry-server" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.834256 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="registry-server" Nov 24 22:37:38 crc kubenswrapper[4801]: E1124 22:37:38.834290 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="extract-utilities" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.834299 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="extract-utilities" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.834629 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c1f6b3-4634-4918-bec5-2da64dec01df" containerName="registry-server" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.836828 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.858155 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.972357 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.972534 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:38 crc kubenswrapper[4801]: I1124 22:37:38.972844 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8btm\" (UniqueName: \"kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm\") pod \"redhat-operators-t2kbj\" (UID: 
\"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.075886 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.075954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.076531 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.076564 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.076651 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8btm\" (UniqueName: \"kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " 
pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.128411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8btm\" (UniqueName: \"kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm\") pod \"redhat-operators-t2kbj\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.164397 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:39 crc kubenswrapper[4801]: I1124 22:37:39.699269 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:37:40 crc kubenswrapper[4801]: I1124 22:37:40.296681 4801 generic.go:334] "Generic (PLEG): container finished" podID="636bf702-8b35-4a6c-9775-a0203728f483" containerID="837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52" exitCode=0 Nov 24 22:37:40 crc kubenswrapper[4801]: I1124 22:37:40.296780 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerDied","Data":"837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52"} Nov 24 22:37:40 crc kubenswrapper[4801]: I1124 22:37:40.296965 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerStarted","Data":"42cbbc2a58a3079e4941e555502645096d674fe47871d2743836e97194d52488"} Nov 24 22:37:40 crc kubenswrapper[4801]: I1124 22:37:40.301731 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:37:41 crc kubenswrapper[4801]: I1124 22:37:41.307350 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerStarted","Data":"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b"} Nov 24 22:37:45 crc kubenswrapper[4801]: I1124 22:37:45.359521 4801 generic.go:334] "Generic (PLEG): container finished" podID="636bf702-8b35-4a6c-9775-a0203728f483" containerID="ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b" exitCode=0 Nov 24 22:37:45 crc kubenswrapper[4801]: I1124 22:37:45.359605 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerDied","Data":"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b"} Nov 24 22:37:46 crc kubenswrapper[4801]: I1124 22:37:46.372655 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerStarted","Data":"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7"} Nov 24 22:37:46 crc kubenswrapper[4801]: I1124 22:37:46.404090 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2kbj" podStartSLOduration=2.656407697 podStartE2EDuration="8.404064924s" podCreationTimestamp="2025-11-24 22:37:38 +0000 UTC" firstStartedPulling="2025-11-24 22:37:40.301279556 +0000 UTC m=+5432.383866226" lastFinishedPulling="2025-11-24 22:37:46.048936783 +0000 UTC m=+5438.131523453" observedRunningTime="2025-11-24 22:37:46.397075878 +0000 UTC m=+5438.479662538" watchObservedRunningTime="2025-11-24 22:37:46.404064924 +0000 UTC m=+5438.486651584" Nov 24 22:37:49 crc kubenswrapper[4801]: I1124 22:37:49.165443 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:49 crc kubenswrapper[4801]: I1124 22:37:49.166053 4801 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:37:50 crc kubenswrapper[4801]: I1124 22:37:50.268399 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2kbj" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" probeResult="failure" output=< Nov 24 22:37:50 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:37:50 crc kubenswrapper[4801]: > Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.319553 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.320194 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.320259 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.321558 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.321634 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154" gracePeriod=600 Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.469603 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154" exitCode=0 Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.469668 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154"} Nov 24 22:37:54 crc kubenswrapper[4801]: I1124 22:37:54.469756 4801 scope.go:117] "RemoveContainer" containerID="82c569773c74dde16f603933d21339502804328eb0a66885151fbc0919088fdf" Nov 24 22:37:55 crc kubenswrapper[4801]: I1124 22:37:55.490770 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e"} Nov 24 22:38:00 crc kubenswrapper[4801]: I1124 22:38:00.233043 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2kbj" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" probeResult="failure" output=< Nov 24 22:38:00 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:38:00 crc kubenswrapper[4801]: > Nov 24 22:38:10 crc kubenswrapper[4801]: I1124 22:38:10.227147 4801 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-t2kbj" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" probeResult="failure" output=< Nov 24 22:38:10 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:38:10 crc kubenswrapper[4801]: > Nov 24 22:38:19 crc kubenswrapper[4801]: I1124 22:38:19.251988 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:38:19 crc kubenswrapper[4801]: I1124 22:38:19.311284 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:38:19 crc kubenswrapper[4801]: I1124 22:38:19.500617 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:38:20 crc kubenswrapper[4801]: I1124 22:38:20.839171 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2kbj" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" containerID="cri-o://10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7" gracePeriod=2 Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.626089 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.695032 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8btm\" (UniqueName: \"kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm\") pod \"636bf702-8b35-4a6c-9775-a0203728f483\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.695408 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities\") pod \"636bf702-8b35-4a6c-9775-a0203728f483\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.695535 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content\") pod \"636bf702-8b35-4a6c-9775-a0203728f483\" (UID: \"636bf702-8b35-4a6c-9775-a0203728f483\") " Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.696200 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities" (OuterVolumeSpecName: "utilities") pod "636bf702-8b35-4a6c-9775-a0203728f483" (UID: "636bf702-8b35-4a6c-9775-a0203728f483"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.710453 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm" (OuterVolumeSpecName: "kube-api-access-r8btm") pod "636bf702-8b35-4a6c-9775-a0203728f483" (UID: "636bf702-8b35-4a6c-9775-a0203728f483"). InnerVolumeSpecName "kube-api-access-r8btm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.798423 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.798789 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8btm\" (UniqueName: \"kubernetes.io/projected/636bf702-8b35-4a6c-9775-a0203728f483-kube-api-access-r8btm\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.799686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "636bf702-8b35-4a6c-9775-a0203728f483" (UID: "636bf702-8b35-4a6c-9775-a0203728f483"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.854184 4801 generic.go:334] "Generic (PLEG): container finished" podID="636bf702-8b35-4a6c-9775-a0203728f483" containerID="10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7" exitCode=0 Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.854233 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerDied","Data":"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7"} Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.854243 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2kbj" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.854265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2kbj" event={"ID":"636bf702-8b35-4a6c-9775-a0203728f483","Type":"ContainerDied","Data":"42cbbc2a58a3079e4941e555502645096d674fe47871d2743836e97194d52488"} Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.854283 4801 scope.go:117] "RemoveContainer" containerID="10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.889155 4801 scope.go:117] "RemoveContainer" containerID="ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.897619 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.901828 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636bf702-8b35-4a6c-9775-a0203728f483-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.908778 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2kbj"] Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.925287 4801 scope.go:117] "RemoveContainer" containerID="837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.969485 4801 scope.go:117] "RemoveContainer" containerID="10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7" Nov 24 22:38:21 crc kubenswrapper[4801]: E1124 22:38:21.971058 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7\": container with ID 
starting with 10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7 not found: ID does not exist" containerID="10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.971099 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7"} err="failed to get container status \"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7\": rpc error: code = NotFound desc = could not find container \"10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7\": container with ID starting with 10662e207cbe53c71240ab0ec693803a4143e183ce4c9fcc7288ef97732831b7 not found: ID does not exist" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.971131 4801 scope.go:117] "RemoveContainer" containerID="ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b" Nov 24 22:38:21 crc kubenswrapper[4801]: E1124 22:38:21.971824 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b\": container with ID starting with ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b not found: ID does not exist" containerID="ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.971863 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b"} err="failed to get container status \"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b\": rpc error: code = NotFound desc = could not find container \"ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b\": container with ID starting with ecaf4b31b7478f812849a07d3c1091f66569ee4a90b596e6fc03872871771d8b not found: 
ID does not exist" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.971881 4801 scope.go:117] "RemoveContainer" containerID="837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52" Nov 24 22:38:21 crc kubenswrapper[4801]: E1124 22:38:21.972353 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52\": container with ID starting with 837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52 not found: ID does not exist" containerID="837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52" Nov 24 22:38:21 crc kubenswrapper[4801]: I1124 22:38:21.972425 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52"} err="failed to get container status \"837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52\": rpc error: code = NotFound desc = could not find container \"837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52\": container with ID starting with 837e6d9bd726468f5f94d2a7fa2fb24ddc472dbc93e7688948fe97eec67a1d52 not found: ID does not exist" Nov 24 22:38:22 crc kubenswrapper[4801]: I1124 22:38:22.679543 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636bf702-8b35-4a6c-9775-a0203728f483" path="/var/lib/kubelet/pods/636bf702-8b35-4a6c-9775-a0203728f483/volumes" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.806693 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:31 crc kubenswrapper[4801]: E1124 22:38:31.809684 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="extract-utilities" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.809850 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="extract-utilities" Nov 24 22:38:31 crc kubenswrapper[4801]: E1124 22:38:31.810010 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="extract-content" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.810121 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="extract-content" Nov 24 22:38:31 crc kubenswrapper[4801]: E1124 22:38:31.810259 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.810386 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.810900 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="636bf702-8b35-4a6c-9775-a0203728f483" containerName="registry-server" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.814075 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.823747 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.890531 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.890613 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.890686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnczr\" (UniqueName: \"kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.993136 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.993683 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.993844 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.994019 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnczr\" (UniqueName: \"kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:31 crc kubenswrapper[4801]: I1124 22:38:31.994585 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:32 crc kubenswrapper[4801]: I1124 22:38:32.021792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnczr\" (UniqueName: \"kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr\") pod \"certified-operators-rcf6c\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:32 crc kubenswrapper[4801]: I1124 22:38:32.149124 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:32 crc kubenswrapper[4801]: I1124 22:38:32.647144 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:33 crc kubenswrapper[4801]: I1124 22:38:33.005132 4801 generic.go:334] "Generic (PLEG): container finished" podID="06069605-6911-4836-a59f-25d2a3dce9c9" containerID="7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5" exitCode=0 Nov 24 22:38:33 crc kubenswrapper[4801]: I1124 22:38:33.005239 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerDied","Data":"7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5"} Nov 24 22:38:33 crc kubenswrapper[4801]: I1124 22:38:33.006051 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerStarted","Data":"9d368eee282ef6b23902d361981710df6cf65a7a6490902531410bc3362289ef"} Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.208936 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.214102 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.232533 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.265805 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.266139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.266239 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpnv\" (UniqueName: \"kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.368717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.368825 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.368854 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrpnv\" (UniqueName: \"kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.369318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.370787 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.397038 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrpnv\" (UniqueName: \"kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv\") pod \"redhat-marketplace-bwxjr\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:34 crc kubenswrapper[4801]: I1124 22:38:34.548147 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:35 crc kubenswrapper[4801]: I1124 22:38:35.030485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerStarted","Data":"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53"} Nov 24 22:38:35 crc kubenswrapper[4801]: I1124 22:38:35.117803 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:35 crc kubenswrapper[4801]: W1124 22:38:35.118541 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2938f2c6_0a03_40d4_8e36_66a5735b3b49.slice/crio-375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f WatchSource:0}: Error finding container 375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f: Status 404 returned error can't find the container with id 375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f Nov 24 22:38:36 crc kubenswrapper[4801]: I1124 22:38:36.045512 4801 generic.go:334] "Generic (PLEG): container finished" podID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerID="a5c87a5b87b76a75ffbb54865e536f266e37d8c5843fb5cbc812e5b33c39fc72" exitCode=0 Nov 24 22:38:36 crc kubenswrapper[4801]: I1124 22:38:36.045864 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerDied","Data":"a5c87a5b87b76a75ffbb54865e536f266e37d8c5843fb5cbc812e5b33c39fc72"} Nov 24 22:38:36 crc kubenswrapper[4801]: I1124 22:38:36.046138 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" 
event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerStarted","Data":"375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f"} Nov 24 22:38:37 crc kubenswrapper[4801]: I1124 22:38:37.058594 4801 generic.go:334] "Generic (PLEG): container finished" podID="06069605-6911-4836-a59f-25d2a3dce9c9" containerID="b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53" exitCode=0 Nov 24 22:38:37 crc kubenswrapper[4801]: I1124 22:38:37.058729 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerDied","Data":"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53"} Nov 24 22:38:37 crc kubenswrapper[4801]: I1124 22:38:37.062958 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerStarted","Data":"38e470f7b7af60e6efe7a6bf63c14d738ecff272f801e31c5b62ee680ef14775"} Nov 24 22:38:38 crc kubenswrapper[4801]: I1124 22:38:38.084875 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerStarted","Data":"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca"} Nov 24 22:38:38 crc kubenswrapper[4801]: I1124 22:38:38.091190 4801 generic.go:334] "Generic (PLEG): container finished" podID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerID="38e470f7b7af60e6efe7a6bf63c14d738ecff272f801e31c5b62ee680ef14775" exitCode=0 Nov 24 22:38:38 crc kubenswrapper[4801]: I1124 22:38:38.091255 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerDied","Data":"38e470f7b7af60e6efe7a6bf63c14d738ecff272f801e31c5b62ee680ef14775"} Nov 24 22:38:38 crc kubenswrapper[4801]: I1124 
22:38:38.111166 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcf6c" podStartSLOduration=2.537659314 podStartE2EDuration="7.111137169s" podCreationTimestamp="2025-11-24 22:38:31 +0000 UTC" firstStartedPulling="2025-11-24 22:38:33.007770424 +0000 UTC m=+5485.090357094" lastFinishedPulling="2025-11-24 22:38:37.581248279 +0000 UTC m=+5489.663834949" observedRunningTime="2025-11-24 22:38:38.104493662 +0000 UTC m=+5490.187080372" watchObservedRunningTime="2025-11-24 22:38:38.111137169 +0000 UTC m=+5490.193723869" Nov 24 22:38:39 crc kubenswrapper[4801]: I1124 22:38:39.109137 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerStarted","Data":"96fc444933491fda8009b76987e2921442075c65d83bf1e3ada2047da11065d3"} Nov 24 22:38:39 crc kubenswrapper[4801]: I1124 22:38:39.130891 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bwxjr" podStartSLOduration=2.5777556280000002 podStartE2EDuration="5.130866524s" podCreationTimestamp="2025-11-24 22:38:34 +0000 UTC" firstStartedPulling="2025-11-24 22:38:36.04858023 +0000 UTC m=+5488.131166900" lastFinishedPulling="2025-11-24 22:38:38.601691126 +0000 UTC m=+5490.684277796" observedRunningTime="2025-11-24 22:38:39.125054363 +0000 UTC m=+5491.207641043" watchObservedRunningTime="2025-11-24 22:38:39.130866524 +0000 UTC m=+5491.213453204" Nov 24 22:38:42 crc kubenswrapper[4801]: I1124 22:38:42.149934 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:42 crc kubenswrapper[4801]: I1124 22:38:42.150712 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:43 crc kubenswrapper[4801]: I1124 22:38:43.203656 4801 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rcf6c" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="registry-server" probeResult="failure" output=< Nov 24 22:38:43 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:38:43 crc kubenswrapper[4801]: > Nov 24 22:38:44 crc kubenswrapper[4801]: I1124 22:38:44.549191 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:44 crc kubenswrapper[4801]: I1124 22:38:44.549820 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:44 crc kubenswrapper[4801]: I1124 22:38:44.628083 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:45 crc kubenswrapper[4801]: I1124 22:38:45.757263 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:45 crc kubenswrapper[4801]: I1124 22:38:45.827186 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:47 crc kubenswrapper[4801]: I1124 22:38:47.207105 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bwxjr" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="registry-server" containerID="cri-o://96fc444933491fda8009b76987e2921442075c65d83bf1e3ada2047da11065d3" gracePeriod=2 Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.231261 4801 generic.go:334] "Generic (PLEG): container finished" podID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerID="96fc444933491fda8009b76987e2921442075c65d83bf1e3ada2047da11065d3" exitCode=0 Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.231391 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerDied","Data":"96fc444933491fda8009b76987e2921442075c65d83bf1e3ada2047da11065d3"} Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.231704 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwxjr" event={"ID":"2938f2c6-0a03-40d4-8e36-66a5735b3b49","Type":"ContainerDied","Data":"375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f"} Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.231722 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375c022feb65ecaadae490b74d972c1c76fe32d5b364efbfd666481f6dd7be9f" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.260178 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.397204 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrpnv\" (UniqueName: \"kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv\") pod \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.397716 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities\") pod \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\" (UID: \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.397806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content\") pod \"2938f2c6-0a03-40d4-8e36-66a5735b3b49\" (UID: 
\"2938f2c6-0a03-40d4-8e36-66a5735b3b49\") " Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.405262 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv" (OuterVolumeSpecName: "kube-api-access-mrpnv") pod "2938f2c6-0a03-40d4-8e36-66a5735b3b49" (UID: "2938f2c6-0a03-40d4-8e36-66a5735b3b49"). InnerVolumeSpecName "kube-api-access-mrpnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.415707 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities" (OuterVolumeSpecName: "utilities") pod "2938f2c6-0a03-40d4-8e36-66a5735b3b49" (UID: "2938f2c6-0a03-40d4-8e36-66a5735b3b49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.432887 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2938f2c6-0a03-40d4-8e36-66a5735b3b49" (UID: "2938f2c6-0a03-40d4-8e36-66a5735b3b49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.501120 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.501170 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2938f2c6-0a03-40d4-8e36-66a5735b3b49-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:49 crc kubenswrapper[4801]: I1124 22:38:49.501192 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrpnv\" (UniqueName: \"kubernetes.io/projected/2938f2c6-0a03-40d4-8e36-66a5735b3b49-kube-api-access-mrpnv\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:50 crc kubenswrapper[4801]: I1124 22:38:50.247789 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwxjr" Nov 24 22:38:50 crc kubenswrapper[4801]: I1124 22:38:50.293041 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:50 crc kubenswrapper[4801]: I1124 22:38:50.304532 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwxjr"] Nov 24 22:38:50 crc kubenswrapper[4801]: I1124 22:38:50.690935 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" path="/var/lib/kubelet/pods/2938f2c6-0a03-40d4-8e36-66a5735b3b49/volumes" Nov 24 22:38:52 crc kubenswrapper[4801]: I1124 22:38:52.249982 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:52 crc kubenswrapper[4801]: I1124 22:38:52.326077 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:52 crc kubenswrapper[4801]: I1124 22:38:52.515066 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:53 crc kubenswrapper[4801]: I1124 22:38:53.313152 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcf6c" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="registry-server" containerID="cri-o://142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca" gracePeriod=2 Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.041861 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.151152 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnczr\" (UniqueName: \"kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr\") pod \"06069605-6911-4836-a59f-25d2a3dce9c9\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.151308 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities\") pod \"06069605-6911-4836-a59f-25d2a3dce9c9\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.151349 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content\") pod \"06069605-6911-4836-a59f-25d2a3dce9c9\" (UID: \"06069605-6911-4836-a59f-25d2a3dce9c9\") " Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.153793 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities" (OuterVolumeSpecName: "utilities") pod "06069605-6911-4836-a59f-25d2a3dce9c9" (UID: "06069605-6911-4836-a59f-25d2a3dce9c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.160811 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr" (OuterVolumeSpecName: "kube-api-access-nnczr") pod "06069605-6911-4836-a59f-25d2a3dce9c9" (UID: "06069605-6911-4836-a59f-25d2a3dce9c9"). InnerVolumeSpecName "kube-api-access-nnczr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.230290 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06069605-6911-4836-a59f-25d2a3dce9c9" (UID: "06069605-6911-4836-a59f-25d2a3dce9c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.255648 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.255694 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06069605-6911-4836-a59f-25d2a3dce9c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.255712 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnczr\" (UniqueName: \"kubernetes.io/projected/06069605-6911-4836-a59f-25d2a3dce9c9-kube-api-access-nnczr\") on node \"crc\" DevicePath \"\"" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.336299 4801 generic.go:334] "Generic (PLEG): container finished" podID="06069605-6911-4836-a59f-25d2a3dce9c9" containerID="142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca" exitCode=0 Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.336396 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerDied","Data":"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca"} Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.336429 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcf6c" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.336463 4801 scope.go:117] "RemoveContainer" containerID="142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.336442 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcf6c" event={"ID":"06069605-6911-4836-a59f-25d2a3dce9c9","Type":"ContainerDied","Data":"9d368eee282ef6b23902d361981710df6cf65a7a6490902531410bc3362289ef"} Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.384978 4801 scope.go:117] "RemoveContainer" containerID="b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.407767 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.426747 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcf6c"] Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.427991 4801 scope.go:117] "RemoveContainer" containerID="7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.499938 4801 scope.go:117] "RemoveContainer" containerID="142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca" Nov 24 22:38:54 crc kubenswrapper[4801]: E1124 22:38:54.500882 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca\": container with ID starting with 142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca not found: ID does not exist" containerID="142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.500928 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca"} err="failed to get container status \"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca\": rpc error: code = NotFound desc = could not find container \"142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca\": container with ID starting with 142d214615bd480863c67aa9025cc44e6471844ae5e1047324d2cbf6029a98ca not found: ID does not exist" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.500958 4801 scope.go:117] "RemoveContainer" containerID="b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53" Nov 24 22:38:54 crc kubenswrapper[4801]: E1124 22:38:54.501516 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53\": container with ID starting with b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53 not found: ID does not exist" containerID="b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.501594 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53"} err="failed to get container status \"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53\": rpc error: code = NotFound desc = could not find container \"b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53\": container with ID starting with b5223780f93f055a115c69282ac2b55b52f19afa3f4566a229e8a8ba0f296e53 not found: ID does not exist" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.501645 4801 scope.go:117] "RemoveContainer" containerID="7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5" Nov 24 22:38:54 crc kubenswrapper[4801]: E1124 
22:38:54.502291 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5\": container with ID starting with 7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5 not found: ID does not exist" containerID="7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.502338 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5"} err="failed to get container status \"7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5\": rpc error: code = NotFound desc = could not find container \"7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5\": container with ID starting with 7ec009e5d61334e563b7629e69700bd70b12615b8d0a56812f176284c71632a5 not found: ID does not exist" Nov 24 22:38:54 crc kubenswrapper[4801]: I1124 22:38:54.682462 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" path="/var/lib/kubelet/pods/06069605-6911-4836-a59f-25d2a3dce9c9/volumes" Nov 24 22:39:54 crc kubenswrapper[4801]: I1124 22:39:54.320325 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:39:54 crc kubenswrapper[4801]: I1124 22:39:54.321008 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 22:40:24 crc kubenswrapper[4801]: I1124 22:40:24.319524 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:40:24 crc kubenswrapper[4801]: I1124 22:40:24.320177 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.319766 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.320421 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.320525 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.321767 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e"} 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.321858 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" gracePeriod=600 Nov 24 22:40:54 crc kubenswrapper[4801]: E1124 22:40:54.468090 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.754685 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" exitCode=0 Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.754749 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e"} Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.754803 4801 scope.go:117] "RemoveContainer" containerID="c4e7f9e05deb22e46632041a6e4bb9c24330b58908343670c6d572cd01824154" Nov 24 22:40:54 crc kubenswrapper[4801]: I1124 22:40:54.755519 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 
24 22:40:54 crc kubenswrapper[4801]: E1124 22:40:54.756066 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.159530 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160770 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="extract-content" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160786 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="extract-content" Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160810 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160818 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160842 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="extract-utilities" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160852 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="extract-utilities" Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160873 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="extract-utilities" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160879 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="extract-utilities" Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160897 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160905 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: E1124 22:41:05.160931 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="extract-content" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.160939 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="extract-content" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.161242 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="06069605-6911-4836-a59f-25d2a3dce9c9" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.161270 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2938f2c6-0a03-40d4-8e36-66a5735b3b49" containerName="registry-server" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.163223 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.174885 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.296800 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.296933 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdkzl\" (UniqueName: \"kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.296977 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.399797 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdkzl\" (UniqueName: \"kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.399903 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.400089 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.400365 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.400523 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.420886 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdkzl\" (UniqueName: \"kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl\") pod \"community-operators-drc2n\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:05 crc kubenswrapper[4801]: I1124 22:41:05.494839 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:06 crc kubenswrapper[4801]: I1124 22:41:06.045850 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:06 crc kubenswrapper[4801]: I1124 22:41:06.927435 4801 generic.go:334] "Generic (PLEG): container finished" podID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerID="b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c" exitCode=0 Nov 24 22:41:06 crc kubenswrapper[4801]: I1124 22:41:06.927614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerDied","Data":"b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c"} Nov 24 22:41:06 crc kubenswrapper[4801]: I1124 22:41:06.927906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerStarted","Data":"0cda51bb55bf30fe0498e494c40aced4d605408beea30f5701d2925f9d11fd9c"} Nov 24 22:41:07 crc kubenswrapper[4801]: I1124 22:41:07.943476 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerStarted","Data":"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4"} Nov 24 22:41:08 crc kubenswrapper[4801]: I1124 22:41:08.675662 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:41:08 crc kubenswrapper[4801]: E1124 22:41:08.676340 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:41:09 crc kubenswrapper[4801]: I1124 22:41:09.977369 4801 generic.go:334] "Generic (PLEG): container finished" podID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerID="59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4" exitCode=0 Nov 24 22:41:09 crc kubenswrapper[4801]: I1124 22:41:09.977514 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerDied","Data":"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4"} Nov 24 22:41:10 crc kubenswrapper[4801]: I1124 22:41:10.990678 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerStarted","Data":"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485"} Nov 24 22:41:11 crc kubenswrapper[4801]: I1124 22:41:11.018906 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-drc2n" podStartSLOduration=2.461616695 podStartE2EDuration="6.018882052s" podCreationTimestamp="2025-11-24 22:41:05 +0000 UTC" firstStartedPulling="2025-11-24 22:41:06.932352224 +0000 UTC m=+5639.014938904" lastFinishedPulling="2025-11-24 22:41:10.489617591 +0000 UTC m=+5642.572204261" observedRunningTime="2025-11-24 22:41:11.016506108 +0000 UTC m=+5643.099092798" watchObservedRunningTime="2025-11-24 22:41:11.018882052 +0000 UTC m=+5643.101468732" Nov 24 22:41:15 crc kubenswrapper[4801]: I1124 22:41:15.497497 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:15 crc kubenswrapper[4801]: I1124 
22:41:15.498330 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:15 crc kubenswrapper[4801]: I1124 22:41:15.599157 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:16 crc kubenswrapper[4801]: I1124 22:41:16.131506 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:16 crc kubenswrapper[4801]: I1124 22:41:16.209260 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.077959 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-drc2n" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="registry-server" containerID="cri-o://fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485" gracePeriod=2 Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.634845 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.764760 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities\") pod \"2056582d-ac39-45da-8ef7-54a9a72bd594\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.764892 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content\") pod \"2056582d-ac39-45da-8ef7-54a9a72bd594\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.765198 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdkzl\" (UniqueName: \"kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl\") pod \"2056582d-ac39-45da-8ef7-54a9a72bd594\" (UID: \"2056582d-ac39-45da-8ef7-54a9a72bd594\") " Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.766600 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities" (OuterVolumeSpecName: "utilities") pod "2056582d-ac39-45da-8ef7-54a9a72bd594" (UID: "2056582d-ac39-45da-8ef7-54a9a72bd594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.779747 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl" (OuterVolumeSpecName: "kube-api-access-tdkzl") pod "2056582d-ac39-45da-8ef7-54a9a72bd594" (UID: "2056582d-ac39-45da-8ef7-54a9a72bd594"). InnerVolumeSpecName "kube-api-access-tdkzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.830344 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2056582d-ac39-45da-8ef7-54a9a72bd594" (UID: "2056582d-ac39-45da-8ef7-54a9a72bd594"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.872677 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.872718 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2056582d-ac39-45da-8ef7-54a9a72bd594-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:41:18 crc kubenswrapper[4801]: I1124 22:41:18.872734 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdkzl\" (UniqueName: \"kubernetes.io/projected/2056582d-ac39-45da-8ef7-54a9a72bd594-kube-api-access-tdkzl\") on node \"crc\" DevicePath \"\"" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.096964 4801 generic.go:334] "Generic (PLEG): container finished" podID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerID="fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485" exitCode=0 Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.097004 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drc2n" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.097030 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerDied","Data":"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485"} Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.098790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drc2n" event={"ID":"2056582d-ac39-45da-8ef7-54a9a72bd594","Type":"ContainerDied","Data":"0cda51bb55bf30fe0498e494c40aced4d605408beea30f5701d2925f9d11fd9c"} Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.098828 4801 scope.go:117] "RemoveContainer" containerID="fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.132747 4801 scope.go:117] "RemoveContainer" containerID="59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.139111 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.150265 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-drc2n"] Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.185124 4801 scope.go:117] "RemoveContainer" containerID="b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.256574 4801 scope.go:117] "RemoveContainer" containerID="fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485" Nov 24 22:41:19 crc kubenswrapper[4801]: E1124 22:41:19.259384 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485\": container with ID starting with fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485 not found: ID does not exist" containerID="fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.259471 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485"} err="failed to get container status \"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485\": rpc error: code = NotFound desc = could not find container \"fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485\": container with ID starting with fffa13744b6ed6df02ce24cb295db189cacbac7450ba57d23272fd7fd9091485 not found: ID does not exist" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.259500 4801 scope.go:117] "RemoveContainer" containerID="59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4" Nov 24 22:41:19 crc kubenswrapper[4801]: E1124 22:41:19.259985 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4\": container with ID starting with 59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4 not found: ID does not exist" containerID="59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.260045 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4"} err="failed to get container status \"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4\": rpc error: code = NotFound desc = could not find container \"59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4\": container with ID 
starting with 59df733fbbcd40deb2b4242dd6b1c7aca25f2e3929e33b74c98395f350dfdce4 not found: ID does not exist" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.260096 4801 scope.go:117] "RemoveContainer" containerID="b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c" Nov 24 22:41:19 crc kubenswrapper[4801]: E1124 22:41:19.262213 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c\": container with ID starting with b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c not found: ID does not exist" containerID="b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.262281 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c"} err="failed to get container status \"b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c\": rpc error: code = NotFound desc = could not find container \"b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c\": container with ID starting with b577f3e18e4b40fe93477b4095518be284062ab25c242bf0bff1bd1d8e3a595c not found: ID does not exist" Nov 24 22:41:19 crc kubenswrapper[4801]: I1124 22:41:19.665983 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:41:19 crc kubenswrapper[4801]: E1124 22:41:19.667021 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:41:20 crc kubenswrapper[4801]: I1124 22:41:20.687016 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" path="/var/lib/kubelet/pods/2056582d-ac39-45da-8ef7-54a9a72bd594/volumes" Nov 24 22:41:29 crc kubenswrapper[4801]: I1124 22:41:29.723583 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:41:29 crc kubenswrapper[4801]: I1124 22:41:29.728705 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="bd963d5f-9d48-4924-a44c-d3a97a3e6461" containerName="galera" probeResult="failure" output="command timed out" Nov 24 22:41:34 crc kubenswrapper[4801]: I1124 22:41:34.665047 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:41:34 crc kubenswrapper[4801]: E1124 22:41:34.666508 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:41:48 crc kubenswrapper[4801]: I1124 22:41:48.665524 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:41:48 crc kubenswrapper[4801]: E1124 22:41:48.666468 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:42:03 crc kubenswrapper[4801]: I1124 22:42:03.663957 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:42:03 crc kubenswrapper[4801]: E1124 22:42:03.664906 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:42:14 crc kubenswrapper[4801]: I1124 22:42:14.665522 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:42:14 crc kubenswrapper[4801]: E1124 22:42:14.666726 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:42:26 crc kubenswrapper[4801]: I1124 22:42:26.664735 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:42:26 crc kubenswrapper[4801]: E1124 22:42:26.666053 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:42:38 crc kubenswrapper[4801]: I1124 22:42:38.676982 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:42:38 crc kubenswrapper[4801]: E1124 22:42:38.678276 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:42:51 crc kubenswrapper[4801]: I1124 22:42:51.665685 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:42:51 crc kubenswrapper[4801]: E1124 22:42:51.667142 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:03 crc kubenswrapper[4801]: I1124 22:43:03.664581 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:43:03 crc kubenswrapper[4801]: E1124 22:43:03.665431 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:17 crc kubenswrapper[4801]: I1124 22:43:17.664652 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:43:17 crc kubenswrapper[4801]: E1124 22:43:17.665810 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:28 crc kubenswrapper[4801]: I1124 22:43:28.673800 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:43:28 crc kubenswrapper[4801]: E1124 22:43:28.674556 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:42 crc kubenswrapper[4801]: I1124 22:43:42.664974 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:43:42 crc kubenswrapper[4801]: E1124 22:43:42.666233 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:53 crc kubenswrapper[4801]: I1124 22:43:53.664688 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:43:53 crc kubenswrapper[4801]: E1124 22:43:53.665945 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:43:55 crc kubenswrapper[4801]: I1124 22:43:55.223344 4801 generic.go:334] "Generic (PLEG): container finished" podID="0b7936e0-8a45-4c32-a7a9-8323443c4274" containerID="26cc21e8b8ff03dca6dda5f2dd2c72c44101370b727d73d3a6e4f149f4e014da" exitCode=0 Nov 24 22:43:55 crc kubenswrapper[4801]: I1124 22:43:55.224013 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0b7936e0-8a45-4c32-a7a9-8323443c4274","Type":"ContainerDied","Data":"26cc21e8b8ff03dca6dda5f2dd2c72c44101370b727d73d3a6e4f149f4e014da"} Nov 24 22:43:56 crc kubenswrapper[4801]: I1124 22:43:56.820190 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.020571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.020646 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.020726 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.021274 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.021322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.021415 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.021653 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twg4f\" (UniqueName: \"kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.021756 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.022663 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.022703 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data\") pod \"0b7936e0-8a45-4c32-a7a9-8323443c4274\" (UID: \"0b7936e0-8a45-4c32-a7a9-8323443c4274\") " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.023708 4801 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.024493 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data" (OuterVolumeSpecName: "config-data") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.028006 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f" (OuterVolumeSpecName: "kube-api-access-twg4f") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "kube-api-access-twg4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.032949 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.033706 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.057941 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.061651 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.064228 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.090483 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0b7936e0-8a45-4c32-a7a9-8323443c4274" (UID: "0b7936e0-8a45-4c32-a7a9-8323443c4274"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125016 4801 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0b7936e0-8a45-4c32-a7a9-8323443c4274-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125062 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125076 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125671 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125705 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twg4f\" (UniqueName: \"kubernetes.io/projected/0b7936e0-8a45-4c32-a7a9-8323443c4274-kube-api-access-twg4f\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125726 4801 reconciler_common.go:293] "Volume detached for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125743 4801 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0b7936e0-8a45-4c32-a7a9-8323443c4274-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.125763 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b7936e0-8a45-4c32-a7a9-8323443c4274-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.165008 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.228222 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.259866 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.259944 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0b7936e0-8a45-4c32-a7a9-8323443c4274","Type":"ContainerDied","Data":"1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d"} Nov 24 22:43:57 crc kubenswrapper[4801]: I1124 22:43:57.260040 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc3375d55f3eaf0e7719a843ead3f75b3237d7d57da159c5c07ed617b2a529d" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.549743 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 22:44:02 crc kubenswrapper[4801]: E1124 22:44:02.551758 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="extract-utilities" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.551794 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="extract-utilities" Nov 24 22:44:02 crc kubenswrapper[4801]: E1124 22:44:02.551843 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7936e0-8a45-4c32-a7a9-8323443c4274" containerName="tempest-tests-tempest-tests-runner" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.551860 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7936e0-8a45-4c32-a7a9-8323443c4274" containerName="tempest-tests-tempest-tests-runner" Nov 24 22:44:02 crc kubenswrapper[4801]: E1124 22:44:02.551921 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="extract-content" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.551937 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" 
containerName="extract-content" Nov 24 22:44:02 crc kubenswrapper[4801]: E1124 22:44:02.552007 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="registry-server" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.552024 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="registry-server" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.552671 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7936e0-8a45-4c32-a7a9-8323443c4274" containerName="tempest-tests-tempest-tests-runner" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.552725 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2056582d-ac39-45da-8ef7-54a9a72bd594" containerName="registry-server" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.554748 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.562645 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bmhbq" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.565311 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.694603 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzqg\" (UniqueName: \"kubernetes.io/projected/7fc00445-cb02-4dc2-b308-df4d2696c0ad-kube-api-access-zxzqg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.695004 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.799137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzqg\" (UniqueName: \"kubernetes.io/projected/7fc00445-cb02-4dc2-b308-df4d2696c0ad-kube-api-access-zxzqg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.799237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.802324 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.849180 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzqg\" (UniqueName: \"kubernetes.io/projected/7fc00445-cb02-4dc2-b308-df4d2696c0ad-kube-api-access-zxzqg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.854918 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7fc00445-cb02-4dc2-b308-df4d2696c0ad\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:02 crc kubenswrapper[4801]: I1124 22:44:02.890356 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 22:44:03 crc kubenswrapper[4801]: I1124 22:44:03.393142 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 22:44:03 crc kubenswrapper[4801]: I1124 22:44:03.394058 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:44:04 crc kubenswrapper[4801]: I1124 22:44:04.354470 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7fc00445-cb02-4dc2-b308-df4d2696c0ad","Type":"ContainerStarted","Data":"fa139c02e63c3f663827d5fa61e12b7c81417ca7c7a500dbe519fec3911406ea"} Nov 24 22:44:05 crc kubenswrapper[4801]: I1124 22:44:05.370182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7fc00445-cb02-4dc2-b308-df4d2696c0ad","Type":"ContainerStarted","Data":"f579aff87ba222c479d671436fe8db0e9215680dc3b24b725b3ff611a8a276f4"} Nov 24 22:44:05 crc kubenswrapper[4801]: I1124 22:44:05.404440 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.278575057 
podStartE2EDuration="3.404412259s" podCreationTimestamp="2025-11-24 22:44:02 +0000 UTC" firstStartedPulling="2025-11-24 22:44:03.393852875 +0000 UTC m=+5815.476439545" lastFinishedPulling="2025-11-24 22:44:04.519690077 +0000 UTC m=+5816.602276747" observedRunningTime="2025-11-24 22:44:05.395399039 +0000 UTC m=+5817.477985749" watchObservedRunningTime="2025-11-24 22:44:05.404412259 +0000 UTC m=+5817.486998959" Nov 24 22:44:08 crc kubenswrapper[4801]: I1124 22:44:08.676017 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:44:08 crc kubenswrapper[4801]: E1124 22:44:08.677283 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:44:22 crc kubenswrapper[4801]: I1124 22:44:22.666696 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:44:22 crc kubenswrapper[4801]: E1124 22:44:22.668400 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:44:34 crc kubenswrapper[4801]: I1124 22:44:34.664860 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:44:34 crc kubenswrapper[4801]: E1124 22:44:34.666459 
4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:44:48 crc kubenswrapper[4801]: I1124 22:44:48.671929 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:44:48 crc kubenswrapper[4801]: E1124 22:44:48.672639 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:44:54 crc kubenswrapper[4801]: I1124 22:44:54.244691 4801 scope.go:117] "RemoveContainer" containerID="a5c87a5b87b76a75ffbb54865e536f266e37d8c5843fb5cbc812e5b33c39fc72" Nov 24 22:44:54 crc kubenswrapper[4801]: I1124 22:44:54.841485 4801 scope.go:117] "RemoveContainer" containerID="38e470f7b7af60e6efe7a6bf63c14d738ecff272f801e31c5b62ee680ef14775" Nov 24 22:44:54 crc kubenswrapper[4801]: I1124 22:44:54.931499 4801 scope.go:117] "RemoveContainer" containerID="96fc444933491fda8009b76987e2921442075c65d83bf1e3ada2047da11065d3" Nov 24 22:44:59 crc kubenswrapper[4801]: I1124 22:44:59.665515 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:44:59 crc kubenswrapper[4801]: E1124 22:44:59.666331 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.186068 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76"] Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.189599 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.195104 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.195877 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.200125 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76"] Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.223681 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.223879 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.224932 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sdf\" (UniqueName: \"kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.328225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.328306 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.328530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sdf\" (UniqueName: \"kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc 
kubenswrapper[4801]: I1124 22:45:00.329298 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.336603 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.347844 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sdf\" (UniqueName: \"kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf\") pod \"collect-profiles-29400405-rlg76\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:00 crc kubenswrapper[4801]: I1124 22:45:00.530738 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:01 crc kubenswrapper[4801]: I1124 22:45:01.057507 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76"] Nov 24 22:45:01 crc kubenswrapper[4801]: I1124 22:45:01.209174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" event={"ID":"4a87a6db-167c-406f-8183-baed9b3ba987","Type":"ContainerStarted","Data":"2bf0ac82c70bfc891546a3f9a7f4c985fdb791999ee6cb1b40e54b024fdf91e8"} Nov 24 22:45:02 crc kubenswrapper[4801]: I1124 22:45:02.222534 4801 generic.go:334] "Generic (PLEG): container finished" podID="4a87a6db-167c-406f-8183-baed9b3ba987" containerID="0c0350c2097a6eef18869d32d05b06c4e637d224693c13946d4e2f4baa5f0348" exitCode=0 Nov 24 22:45:02 crc kubenswrapper[4801]: I1124 22:45:02.222592 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" event={"ID":"4a87a6db-167c-406f-8183-baed9b3ba987","Type":"ContainerDied","Data":"0c0350c2097a6eef18869d32d05b06c4e637d224693c13946d4e2f4baa5f0348"} Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.761154 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vts7v/must-gather-hdwj6"] Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.767731 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.771766 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vts7v"/"kube-root-ca.crt" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.771940 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vts7v"/"default-dockercfg-s7nwj" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.772099 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vts7v"/"openshift-service-ca.crt" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.789275 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vts7v/must-gather-hdwj6"] Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.801926 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.956911 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume\") pod \"4a87a6db-167c-406f-8183-baed9b3ba987\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.957013 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4sdf\" (UniqueName: \"kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf\") pod \"4a87a6db-167c-406f-8183-baed9b3ba987\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.957218 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume\") 
pod \"4a87a6db-167c-406f-8183-baed9b3ba987\" (UID: \"4a87a6db-167c-406f-8183-baed9b3ba987\") " Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.957748 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.957866 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gf8n\" (UniqueName: \"kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.959130 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a87a6db-167c-406f-8183-baed9b3ba987" (UID: "4a87a6db-167c-406f-8183-baed9b3ba987"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.964154 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a87a6db-167c-406f-8183-baed9b3ba987" (UID: "4a87a6db-167c-406f-8183-baed9b3ba987"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4801]: I1124 22:45:03.964931 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf" (OuterVolumeSpecName: "kube-api-access-v4sdf") pod "4a87a6db-167c-406f-8183-baed9b3ba987" (UID: "4a87a6db-167c-406f-8183-baed9b3ba987"). InnerVolumeSpecName "kube-api-access-v4sdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.060053 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gf8n\" (UniqueName: \"kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.060282 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.061230 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87a6db-167c-406f-8183-baed9b3ba987-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.061655 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:04 crc kubenswrapper[4801]: 
I1124 22:45:04.061751 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87a6db-167c-406f-8183-baed9b3ba987-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.061794 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4sdf\" (UniqueName: \"kubernetes.io/projected/4a87a6db-167c-406f-8183-baed9b3ba987-kube-api-access-v4sdf\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.082984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gf8n\" (UniqueName: \"kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n\") pod \"must-gather-hdwj6\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.121407 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.263092 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.263692 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-rlg76" event={"ID":"4a87a6db-167c-406f-8183-baed9b3ba987","Type":"ContainerDied","Data":"2bf0ac82c70bfc891546a3f9a7f4c985fdb791999ee6cb1b40e54b024fdf91e8"} Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.263751 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf0ac82c70bfc891546a3f9a7f4c985fdb791999ee6cb1b40e54b024fdf91e8" Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.616258 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vts7v/must-gather-hdwj6"] Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.895920 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92"] Nov 24 22:45:04 crc kubenswrapper[4801]: I1124 22:45:04.910127 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-vgs92"] Nov 24 22:45:05 crc kubenswrapper[4801]: I1124 22:45:05.277935 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/must-gather-hdwj6" event={"ID":"ff20195d-a4de-4ba9-afb0-0eae3710a84a","Type":"ContainerStarted","Data":"e648d5039df6144a061056ab04ec1e587b3193bafe7b806a0c2c9cb08f6da83f"} Nov 24 22:45:06 crc kubenswrapper[4801]: I1124 22:45:06.689024 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1497ed80-3ec3-4646-8087-54baca8015da" path="/var/lib/kubelet/pods/1497ed80-3ec3-4646-8087-54baca8015da/volumes" Nov 24 22:45:11 crc kubenswrapper[4801]: I1124 22:45:11.392463 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/must-gather-hdwj6" 
event={"ID":"ff20195d-a4de-4ba9-afb0-0eae3710a84a","Type":"ContainerStarted","Data":"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8"} Nov 24 22:45:11 crc kubenswrapper[4801]: I1124 22:45:11.393227 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/must-gather-hdwj6" event={"ID":"ff20195d-a4de-4ba9-afb0-0eae3710a84a","Type":"ContainerStarted","Data":"375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8"} Nov 24 22:45:11 crc kubenswrapper[4801]: I1124 22:45:11.430197 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vts7v/must-gather-hdwj6" podStartSLOduration=3.126895787 podStartE2EDuration="8.430173221s" podCreationTimestamp="2025-11-24 22:45:03 +0000 UTC" firstStartedPulling="2025-11-24 22:45:04.615277263 +0000 UTC m=+5876.697863923" lastFinishedPulling="2025-11-24 22:45:09.918554677 +0000 UTC m=+5882.001141357" observedRunningTime="2025-11-24 22:45:11.416802396 +0000 UTC m=+5883.499389106" watchObservedRunningTime="2025-11-24 22:45:11.430173221 +0000 UTC m=+5883.512759901" Nov 24 22:45:13 crc kubenswrapper[4801]: I1124 22:45:13.664322 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:45:13 crc kubenswrapper[4801]: E1124 22:45:13.664916 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.478008 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vts7v/crc-debug-lf58b"] Nov 24 22:45:16 crc kubenswrapper[4801]: E1124 
22:45:16.479502 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a87a6db-167c-406f-8183-baed9b3ba987" containerName="collect-profiles" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.479527 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a87a6db-167c-406f-8183-baed9b3ba987" containerName="collect-profiles" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.479966 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a87a6db-167c-406f-8183-baed9b3ba987" containerName="collect-profiles" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.481053 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.633640 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.633812 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbnz\" (UniqueName: \"kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.736580 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.736916 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vqbnz\" (UniqueName: \"kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.739085 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.774473 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbnz\" (UniqueName: \"kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz\") pod \"crc-debug-lf58b\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:16 crc kubenswrapper[4801]: I1124 22:45:16.806536 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:45:17 crc kubenswrapper[4801]: I1124 22:45:17.470381 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-lf58b" event={"ID":"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8","Type":"ContainerStarted","Data":"5d5378ba54dbc1b41f10d01b7ad15d47a624323aca8c5f34749b2332a024a45e"} Nov 24 22:45:27 crc kubenswrapper[4801]: I1124 22:45:27.664899 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:45:27 crc kubenswrapper[4801]: E1124 22:45:27.666547 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:45:28 crc kubenswrapper[4801]: I1124 22:45:28.597116 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-lf58b" event={"ID":"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8","Type":"ContainerStarted","Data":"f9137d3eaddf9de9ee03c1afca69b60619d09b18432310c3b3c83204b3fb70df"} Nov 24 22:45:28 crc kubenswrapper[4801]: I1124 22:45:28.620284 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vts7v/crc-debug-lf58b" podStartSLOduration=1.651646757 podStartE2EDuration="12.62025911s" podCreationTimestamp="2025-11-24 22:45:16 +0000 UTC" firstStartedPulling="2025-11-24 22:45:16.874819736 +0000 UTC m=+5888.957406416" lastFinishedPulling="2025-11-24 22:45:27.843432089 +0000 UTC m=+5899.926018769" observedRunningTime="2025-11-24 22:45:28.608766853 +0000 UTC m=+5900.691353543" watchObservedRunningTime="2025-11-24 22:45:28.62025911 +0000 UTC 
m=+5900.702845780" Nov 24 22:45:41 crc kubenswrapper[4801]: I1124 22:45:41.665552 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:45:41 crc kubenswrapper[4801]: E1124 22:45:41.666648 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:45:52 crc kubenswrapper[4801]: I1124 22:45:52.664392 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:45:52 crc kubenswrapper[4801]: E1124 22:45:52.665476 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:45:55 crc kubenswrapper[4801]: I1124 22:45:55.037645 4801 scope.go:117] "RemoveContainer" containerID="2b93a35f1bac34d8f4eb45443e4a0c90ef8e9a0451b0cd4c611484f0659dad93" Nov 24 22:46:07 crc kubenswrapper[4801]: I1124 22:46:07.665684 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:46:08 crc kubenswrapper[4801]: I1124 22:46:08.147654 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac"} Nov 24 22:46:20 crc kubenswrapper[4801]: I1124 22:46:20.310042 4801 generic.go:334] "Generic (PLEG): container finished" podID="0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" containerID="f9137d3eaddf9de9ee03c1afca69b60619d09b18432310c3b3c83204b3fb70df" exitCode=0 Nov 24 22:46:20 crc kubenswrapper[4801]: I1124 22:46:20.310130 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-lf58b" event={"ID":"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8","Type":"ContainerDied","Data":"f9137d3eaddf9de9ee03c1afca69b60619d09b18432310c3b3c83204b3fb70df"} Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.489439 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.537814 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-lf58b"] Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.550964 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-lf58b"] Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.596447 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqbnz\" (UniqueName: \"kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz\") pod \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.597333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host\") pod \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\" (UID: \"0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8\") " Nov 24 22:46:21 crc 
kubenswrapper[4801]: I1124 22:46:21.597565 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host" (OuterVolumeSpecName: "host") pod "0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" (UID: "0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.598258 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.606516 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz" (OuterVolumeSpecName: "kube-api-access-vqbnz") pod "0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" (UID: "0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8"). InnerVolumeSpecName "kube-api-access-vqbnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:46:21 crc kubenswrapper[4801]: I1124 22:46:21.701075 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqbnz\" (UniqueName: \"kubernetes.io/projected/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8-kube-api-access-vqbnz\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.339413 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d5378ba54dbc1b41f10d01b7ad15d47a624323aca8c5f34749b2332a024a45e" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.339468 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-lf58b" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.679039 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" path="/var/lib/kubelet/pods/0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8/volumes" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.736557 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vts7v/crc-debug-dgzvx"] Nov 24 22:46:22 crc kubenswrapper[4801]: E1124 22:46:22.737110 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" containerName="container-00" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.737128 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" containerName="container-00" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.737400 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb7ddc3-cbc4-467f-a363-ed51a5a70ee8" containerName="container-00" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.738227 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.830135 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfsg\" (UniqueName: \"kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.830209 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.933253 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gfsg\" (UniqueName: \"kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.933322 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc kubenswrapper[4801]: I1124 22:46:22.933507 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:22 crc 
kubenswrapper[4801]: I1124 22:46:22.951526 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gfsg\" (UniqueName: \"kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg\") pod \"crc-debug-dgzvx\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:23 crc kubenswrapper[4801]: I1124 22:46:23.062895 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:23 crc kubenswrapper[4801]: I1124 22:46:23.351000 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" event={"ID":"3a929aa0-4981-44fc-a918-1a36ace26c07","Type":"ContainerStarted","Data":"88021ba12140262890b0c254ca7c1d3e5307d54c146e483bf749c14e2c09afde"} Nov 24 22:46:24 crc kubenswrapper[4801]: I1124 22:46:24.366693 4801 generic.go:334] "Generic (PLEG): container finished" podID="3a929aa0-4981-44fc-a918-1a36ace26c07" containerID="ecae779da0b36a86874e3266c70e446ea67ad2a5309fc9e563f29a8480874cb7" exitCode=0 Nov 24 22:46:24 crc kubenswrapper[4801]: I1124 22:46:24.366802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" event={"ID":"3a929aa0-4981-44fc-a918-1a36ace26c07","Type":"ContainerDied","Data":"ecae779da0b36a86874e3266c70e446ea67ad2a5309fc9e563f29a8480874cb7"} Nov 24 22:46:25 crc kubenswrapper[4801]: I1124 22:46:25.942550 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.112669 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host\") pod \"3a929aa0-4981-44fc-a918-1a36ace26c07\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.112951 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gfsg\" (UniqueName: \"kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg\") pod \"3a929aa0-4981-44fc-a918-1a36ace26c07\" (UID: \"3a929aa0-4981-44fc-a918-1a36ace26c07\") " Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.112739 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host" (OuterVolumeSpecName: "host") pod "3a929aa0-4981-44fc-a918-1a36ace26c07" (UID: "3a929aa0-4981-44fc-a918-1a36ace26c07"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.113805 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a929aa0-4981-44fc-a918-1a36ace26c07-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.119762 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg" (OuterVolumeSpecName: "kube-api-access-5gfsg") pod "3a929aa0-4981-44fc-a918-1a36ace26c07" (UID: "3a929aa0-4981-44fc-a918-1a36ace26c07"). InnerVolumeSpecName "kube-api-access-5gfsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.215877 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gfsg\" (UniqueName: \"kubernetes.io/projected/3a929aa0-4981-44fc-a918-1a36ace26c07-kube-api-access-5gfsg\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.386987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" event={"ID":"3a929aa0-4981-44fc-a918-1a36ace26c07","Type":"ContainerDied","Data":"88021ba12140262890b0c254ca7c1d3e5307d54c146e483bf749c14e2c09afde"} Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.387037 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88021ba12140262890b0c254ca7c1d3e5307d54c146e483bf749c14e2c09afde" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.387073 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-dgzvx" Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.851595 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-dgzvx"] Nov 24 22:46:26 crc kubenswrapper[4801]: I1124 22:46:26.863934 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-dgzvx"] Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.057670 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vts7v/crc-debug-2jv67"] Nov 24 22:46:28 crc kubenswrapper[4801]: E1124 22:46:28.058647 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a929aa0-4981-44fc-a918-1a36ace26c07" containerName="container-00" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.058664 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a929aa0-4981-44fc-a918-1a36ace26c07" containerName="container-00" Nov 24 22:46:28 crc 
kubenswrapper[4801]: I1124 22:46:28.058923 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a929aa0-4981-44fc-a918-1a36ace26c07" containerName="container-00" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.059889 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.175194 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tns5q\" (UniqueName: \"kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.175499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.278091 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.278202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tns5q\" (UniqueName: \"kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.278216 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.309828 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tns5q\" (UniqueName: \"kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q\") pod \"crc-debug-2jv67\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.399659 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:28 crc kubenswrapper[4801]: W1124 22:46:28.447906 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46589b7_1561_43db_a8a5_11df1de0c789.slice/crio-94f0b63919671392d98c2bfda8f39a74ea6bfd9cbd4e256819d293d0382e2e34 WatchSource:0}: Error finding container 94f0b63919671392d98c2bfda8f39a74ea6bfd9cbd4e256819d293d0382e2e34: Status 404 returned error can't find the container with id 94f0b63919671392d98c2bfda8f39a74ea6bfd9cbd4e256819d293d0382e2e34 Nov 24 22:46:28 crc kubenswrapper[4801]: I1124 22:46:28.699673 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a929aa0-4981-44fc-a918-1a36ace26c07" path="/var/lib/kubelet/pods/3a929aa0-4981-44fc-a918-1a36ace26c07/volumes" Nov 24 22:46:29 crc kubenswrapper[4801]: I1124 22:46:29.435461 4801 generic.go:334] "Generic (PLEG): container finished" podID="c46589b7-1561-43db-a8a5-11df1de0c789" containerID="fa391b7f3dfd2fd2e3f35bd8f64d9dc7983e5cfd4297c1251d1e7fe5ca478459" exitCode=0 Nov 24 22:46:29 crc kubenswrapper[4801]: I1124 22:46:29.435563 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-2jv67" event={"ID":"c46589b7-1561-43db-a8a5-11df1de0c789","Type":"ContainerDied","Data":"fa391b7f3dfd2fd2e3f35bd8f64d9dc7983e5cfd4297c1251d1e7fe5ca478459"} Nov 24 22:46:29 crc kubenswrapper[4801]: I1124 22:46:29.435843 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/crc-debug-2jv67" event={"ID":"c46589b7-1561-43db-a8a5-11df1de0c789","Type":"ContainerStarted","Data":"94f0b63919671392d98c2bfda8f39a74ea6bfd9cbd4e256819d293d0382e2e34"} Nov 24 22:46:29 crc kubenswrapper[4801]: I1124 22:46:29.498984 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-2jv67"] Nov 24 22:46:29 crc kubenswrapper[4801]: I1124 22:46:29.515878 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vts7v/crc-debug-2jv67"] Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.601820 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.756927 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tns5q\" (UniqueName: \"kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q\") pod \"c46589b7-1561-43db-a8a5-11df1de0c789\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.757162 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host\") pod \"c46589b7-1561-43db-a8a5-11df1de0c789\" (UID: \"c46589b7-1561-43db-a8a5-11df1de0c789\") " Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.757250 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host" (OuterVolumeSpecName: "host") pod "c46589b7-1561-43db-a8a5-11df1de0c789" (UID: "c46589b7-1561-43db-a8a5-11df1de0c789"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.758060 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c46589b7-1561-43db-a8a5-11df1de0c789-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.764735 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q" (OuterVolumeSpecName: "kube-api-access-tns5q") pod "c46589b7-1561-43db-a8a5-11df1de0c789" (UID: "c46589b7-1561-43db-a8a5-11df1de0c789"). InnerVolumeSpecName "kube-api-access-tns5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:46:30 crc kubenswrapper[4801]: I1124 22:46:30.860672 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tns5q\" (UniqueName: \"kubernetes.io/projected/c46589b7-1561-43db-a8a5-11df1de0c789-kube-api-access-tns5q\") on node \"crc\" DevicePath \"\"" Nov 24 22:46:31 crc kubenswrapper[4801]: I1124 22:46:31.462304 4801 scope.go:117] "RemoveContainer" containerID="fa391b7f3dfd2fd2e3f35bd8f64d9dc7983e5cfd4297c1251d1e7fe5ca478459" Nov 24 22:46:31 crc kubenswrapper[4801]: I1124 22:46:31.462404 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/crc-debug-2jv67" Nov 24 22:46:32 crc kubenswrapper[4801]: I1124 22:46:32.677051 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46589b7-1561-43db-a8a5-11df1de0c789" path="/var/lib/kubelet/pods/c46589b7-1561-43db-a8a5-11df1de0c789/volumes" Nov 24 22:46:56 crc kubenswrapper[4801]: I1124 22:46:56.906480 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b/aodh-api/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.128652 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b/aodh-evaluator/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.161730 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b/aodh-listener/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.192586 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0ff7dcad-5c5f-4d7d-88e8-ae80a159f61b/aodh-notifier/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.359647 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fc9f9c96b-sqlnv_baf3533a-d21d-43fc-9d81-1cc5aa48893a/barbican-api/0.log" Nov 24 
22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.387011 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fc9f9c96b-sqlnv_baf3533a-d21d-43fc-9d81-1cc5aa48893a/barbican-api-log/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.467099 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-666f9c9746-pfrcd_920a12a9-4e3f-4425-824d-1cbb1c686f99/barbican-keystone-listener/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.669677 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-666f9c9746-pfrcd_920a12a9-4e3f-4425-824d-1cbb1c686f99/barbican-keystone-listener-log/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.700954 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8664997b87-rmz64_7acdae11-3fc7-4466-987f-5c7360c128c6/barbican-worker/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.704268 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8664997b87-rmz64_7acdae11-3fc7-4466-987f-5c7360c128c6/barbican-worker-log/0.log" Nov 24 22:46:57 crc kubenswrapper[4801]: I1124 22:46:57.912660 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f5tgk_c9a784dc-c9be-4100-8002-063d8a5b5985/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.044681 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_94ec5195-8cde-44c7-8d98-7bf19a4b20e0/ceilometer-central-agent/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.147955 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_94ec5195-8cde-44c7-8d98-7bf19a4b20e0/ceilometer-notification-agent/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.212304 4801 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_94ec5195-8cde-44c7-8d98-7bf19a4b20e0/proxy-httpd/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.258071 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_94ec5195-8cde-44c7-8d98-7bf19a4b20e0/sg-core/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.693446 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f5c15f16-617b-4244-a628-baaf35de4f1f/cinder-api-log/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.722868 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_99d06093-c556-40fc-bec9-47096b4c2aa5/cinder-scheduler/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.782197 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f5c15f16-617b-4244-a628-baaf35de4f1f/cinder-api/0.log" Nov 24 22:46:58 crc kubenswrapper[4801]: I1124 22:46:58.978449 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_99d06093-c556-40fc-bec9-47096b4c2aa5/probe/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.004090 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bl2rb_821b772b-2d36-4166-bfff-48e9f623f3bf/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.211934 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c9n64_f719ed62-4f8d-41fd-b3c5-b94b1c0b3b87/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.247639 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-pbw5k_6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7/init/0.log" Nov 24 22:46:59 crc 
kubenswrapper[4801]: I1124 22:46:59.590815 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-pbw5k_6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7/init/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.630790 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tv56m_bbed0308-b82c-4512-8391-a46502977b63/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.697778 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-pbw5k_6c838ec2-ad0b-43d4-b4f1-8c897ce67ff7/dnsmasq-dns/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.831954 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f2f8c36c-e543-4ea8-972a-1c9fe6ba022f/glance-httpd/0.log" Nov 24 22:46:59 crc kubenswrapper[4801]: I1124 22:46:59.914449 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f2f8c36c-e543-4ea8-972a-1c9fe6ba022f/glance-log/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.035879 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_896d841d-ba4e-483a-b586-53227f9a9546/glance-httpd/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.103137 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_896d841d-ba4e-483a-b586-53227f9a9546/glance-log/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.607567 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5f86b8d6b6-l58mw_ea436a56-ba1e-4897-8f60-6811f68c4754/heat-engine/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.841807 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-cd576cf44-85blw_06168c63-0386-4a8b-b1c5-455f0efc4ebf/heat-api/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.874728 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5726k_a7df1790-5911-4056-b880-6140a93203b7/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.937388 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-69677c9fd-fttvq_5bfdd410-6e0a-42f8-8317-eaee1161c428/heat-cfnapi/0.log" Nov 24 22:47:00 crc kubenswrapper[4801]: I1124 22:47:00.983304 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7rgnd_0664fc9e-325e-495e-9a4d-342fdebda59c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.169636 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400361-pbxgd_33464b9f-95a5-4b35-90fd-c382cb6899e2/keystone-cron/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.439765 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_25c22e81-bd05-43b1-8cc2-7a460fdeb76f/kube-state-metrics/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.532742 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c567b6958-cngc5_a6a2894c-3c0d-442a-ab62-31748e315cbe/keystone-api/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.533246 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qjjdf_c53246eb-91e4-40f1-b8e6-c76fdade9d9d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.642354 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-w7cc5_eb36beab-725c-4fa1-a960-47e85a2554a7/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:01 crc kubenswrapper[4801]: I1124 22:47:01.805239 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_54d96f58-e273-4784-b806-ff37bf6267de/mysqld-exporter/0.log" Nov 24 22:47:02 crc kubenswrapper[4801]: I1124 22:47:02.170203 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgtdt_92764f41-c2a3-479e-a671-81039586b065/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:02 crc kubenswrapper[4801]: I1124 22:47:02.230336 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f8c5d485-f945g_1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43/neutron-httpd/0.log" Nov 24 22:47:02 crc kubenswrapper[4801]: I1124 22:47:02.269918 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f8c5d485-f945g_1a5eaf2e-0d8c-4881-a7ea-cf729ccf0c43/neutron-api/0.log" Nov 24 22:47:02 crc kubenswrapper[4801]: I1124 22:47:02.752794 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e4053b18-2499-4241-a2ad-5a673b22c6ae/nova-cell0-conductor-conductor/0.log" Nov 24 22:47:03 crc kubenswrapper[4801]: I1124 22:47:03.135052 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b25f7f28-8c86-4dcb-8da0-b3ece5f4a7b2/nova-cell1-conductor-conductor/0.log" Nov 24 22:47:03 crc kubenswrapper[4801]: I1124 22:47:03.195007 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_05da65ec-b05b-46ff-886f-e800aae4b6b3/nova-api-log/0.log" Nov 24 22:47:03 crc kubenswrapper[4801]: I1124 22:47:03.342793 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_661955ce-b069-43fc-a5de-8f5aaafd0c6d/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 22:47:03 crc kubenswrapper[4801]: I1124 22:47:03.458792 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cp65d_3335f4ce-4e53-47f6-b241-792b016762da/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:03 crc kubenswrapper[4801]: I1124 22:47:03.590261 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_05da65ec-b05b-46ff-886f-e800aae4b6b3/nova-api-api/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.251067 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8fabb604-4b49-4341-8d2b-9d090f472937/nova-metadata-log/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.535956 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bd963d5f-9d48-4924-a44c-d3a97a3e6461/mysql-bootstrap/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.582205 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_107c2bb1-2ca8-4f37-a2e0-51d928f7a91e/nova-scheduler-scheduler/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.681889 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bd963d5f-9d48-4924-a44c-d3a97a3e6461/mysql-bootstrap/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.783914 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bd963d5f-9d48-4924-a44c-d3a97a3e6461/galera/0.log" Nov 24 22:47:04 crc kubenswrapper[4801]: I1124 22:47:04.906015 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c6e61e62-b039-4898-b4fa-f20160b67641/mysql-bootstrap/0.log" Nov 24 22:47:05 crc kubenswrapper[4801]: I1124 22:47:05.121680 4801 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c6e61e62-b039-4898-b4fa-f20160b67641/mysql-bootstrap/0.log" Nov 24 22:47:05 crc kubenswrapper[4801]: I1124 22:47:05.155311 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c6e61e62-b039-4898-b4fa-f20160b67641/galera/0.log" Nov 24 22:47:05 crc kubenswrapper[4801]: I1124 22:47:05.331239 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c6be94bd-58b9-45e1-a18b-a85a048a5278/openstackclient/0.log" Nov 24 22:47:05 crc kubenswrapper[4801]: I1124 22:47:05.446652 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2xlzh_b760e8c8-3610-49b7-bfb0-0f8c20b7a042/openstack-network-exporter/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.262265 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sbns4_5520b43a-322d-44eb-87d5-b35af1ad70bc/ovsdb-server-init/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.475444 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sbns4_5520b43a-322d-44eb-87d5-b35af1ad70bc/ovs-vswitchd/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.476730 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sbns4_5520b43a-322d-44eb-87d5-b35af1ad70bc/ovsdb-server-init/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.502120 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sbns4_5520b43a-322d-44eb-87d5-b35af1ad70bc/ovsdb-server/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.509508 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8fabb604-4b49-4341-8d2b-9d090f472937/nova-metadata-metadata/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.742283 4801 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-qm4tj_89430698-4742-4f29-93c4-ecd964255e62/ovn-controller/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.759216 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-p9dw7_bf1804da-480b-4cee-8b8e-e25ef5a6e119/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.929261 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3a2dce8d-7a93-4896-b362-5af3371f1916/openstack-network-exporter/0.log" Nov 24 22:47:06 crc kubenswrapper[4801]: I1124 22:47:06.969224 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3a2dce8d-7a93-4896-b362-5af3371f1916/ovn-northd/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.135121 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5ff19f32-8c6d-4785-ac60-ce3d2c7939ad/openstack-network-exporter/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.196222 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5ff19f32-8c6d-4785-ac60-ce3d2c7939ad/ovsdbserver-nb/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.340137 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2b1d77f1-fc9a-46ea-8fd8-629f36a3b659/openstack-network-exporter/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.393150 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2b1d77f1-fc9a-46ea-8fd8-629f36a3b659/ovsdbserver-sb/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.563420 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66f87bb5dd-vfs99_142f97c2-64f1-455a-bd07-ed9d5f9ab466/placement-api/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.712188 4801 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-66f87bb5dd-vfs99_142f97c2-64f1-455a-bd07-ed9d5f9ab466/placement-log/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.735064 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e4b29e97-8d63-4b0d-8df1-9e832ecfaf17/init-config-reloader/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.925169 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e4b29e97-8d63-4b0d-8df1-9e832ecfaf17/prometheus/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.937464 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e4b29e97-8d63-4b0d-8df1-9e832ecfaf17/init-config-reloader/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.939055 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e4b29e97-8d63-4b0d-8df1-9e832ecfaf17/config-reloader/0.log" Nov 24 22:47:07 crc kubenswrapper[4801]: I1124 22:47:07.959895 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e4b29e97-8d63-4b0d-8df1-9e832ecfaf17/thanos-sidecar/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.145765 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a/setup-container/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.313348 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a/setup-container/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.377564 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e50000d1-be7d-4dea-86dc-d616dda527b7/setup-container/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.390711 4801 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bb53adc7-d56d-4fd7-b9ca-1070c5bc5d9a/rabbitmq/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.598721 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e50000d1-be7d-4dea-86dc-d616dda527b7/setup-container/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.659647 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e50000d1-be7d-4dea-86dc-d616dda527b7/rabbitmq/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.713292 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb/setup-container/0.log" Nov 24 22:47:08 crc kubenswrapper[4801]: I1124 22:47:08.959480 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb/setup-container/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.015696 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6bb98ae2-f7aa-4f46-86d0-9134dfd9c5cb/rabbitmq/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.067243 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_1881365b-14f3-4392-930e-8d054a993b96/setup-container/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.237966 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_1881365b-14f3-4392-930e-8d054a993b96/setup-container/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.294210 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-56w25_c79e580c-8efa-4619-b727-4d24b3c7435f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.336018 4801 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_1881365b-14f3-4392-930e-8d054a993b96/rabbitmq/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.535779 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fqw4d_806070c2-2599-47c1-9a86-bd078d5cc939/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.626462 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-j6rkx_be732949-920b-4e0c-ac7e-773a6983a64b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.775070 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cm7vh_520f6efa-a70d-4702-83b8-0c621dfd3a8a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:09 crc kubenswrapper[4801]: I1124 22:47:09.891710 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-l5gnv_35523e2b-7056-4e9c-b019-cb8cb6a8490b/ssh-known-hosts-edpm-deployment/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.171439 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77d858bcc9-bwkzb_44f65da7-819a-43dd-9267-2b30cffff0f2/proxy-server/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.279320 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rbr7q_4add1738-d33e-4fc5-aaaf-ae28dcd88220/swift-ring-rebalance/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.339158 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77d858bcc9-bwkzb_44f65da7-819a-43dd-9267-2b30cffff0f2/proxy-httpd/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.413225 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/account-auditor/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.526726 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/account-reaper/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.613743 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/account-replicator/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.662994 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/account-server/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.735165 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/container-auditor/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.849240 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/container-server/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.877157 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/container-replicator/0.log" Nov 24 22:47:10 crc kubenswrapper[4801]: I1124 22:47:10.918541 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/container-updater/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.009767 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/object-auditor/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.066844 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/object-expirer/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.146870 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/object-server/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.231128 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/object-replicator/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.239421 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/object-updater/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.305511 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/rsync/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.370493 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ba1dd9d3-072d-4cc1-b164-9701cb421564/swift-recon-cron/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.561321 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wbngh_ee17e4b7-fd24-4e86-8aa5-fe62a55c58fd/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:11 crc kubenswrapper[4801]: I1124 22:47:11.755763 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-tgqlf_0e8bfdd8-62df-4ab7-b82b-8cd737bf3beb/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:12 crc kubenswrapper[4801]: I1124 22:47:12.194278 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7fc00445-cb02-4dc2-b308-df4d2696c0ad/test-operator-logs-container/0.log" Nov 24 22:47:12 crc kubenswrapper[4801]: I1124 22:47:12.298255 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nwvvx_0a88366c-083c-452a-b097-2087766abeb3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 22:47:12 crc kubenswrapper[4801]: I1124 22:47:12.834511 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0b7936e0-8a45-4c32-a7a9-8323443c4274/tempest-tests-tempest-tests-runner/0.log" Nov 24 22:47:18 crc kubenswrapper[4801]: I1124 22:47:18.463329 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9a25adc5-f2a3-44b0-aeb6-8a45707600fa/memcached/0.log" Nov 24 22:47:42 crc kubenswrapper[4801]: I1124 22:47:42.768285 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/util/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.003332 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/pull/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.003850 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/pull/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.035058 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/util/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.184459 4801 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/util/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.193406 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/extract/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.237115 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9b547be2eac4c3e8f15e2800f340fb44715065f09b75b108844ab9dee6zx66c_972dff04-e157-4f58-b501-f8a02504fb0f/pull/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.395313 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-nf9rx_a4c2438a-8323-4042-a1c2-2db0fb3fd096/kube-rbac-proxy/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.468205 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8bd5p_c652c759-8522-4638-a5e5-dcdcb965fa66/kube-rbac-proxy/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.477822 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-nf9rx_a4c2438a-8323-4042-a1c2-2db0fb3fd096/manager/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.600256 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8bd5p_c652c759-8522-4638-a5e5-dcdcb965fa66/manager/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.666765 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-zvrq5_e0f6e49b-c86a-4d0f-b5fd-7c28c0859544/kube-rbac-proxy/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.667927 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-zvrq5_e0f6e49b-c86a-4d0f-b5fd-7c28c0859544/manager/0.log" Nov 24 22:47:43 crc kubenswrapper[4801]: I1124 22:47:43.843741 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-q4x69_57e84789-6801-4b26-9014-6735327f3559/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.038909 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-q4x69_57e84789-6801-4b26-9014-6735327f3559/manager/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.114754 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-mvmfd_fab8c86f-5335-48e2-8272-fd4c04a1f28c/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.195973 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-mvmfd_fab8c86f-5335-48e2-8272-fd4c04a1f28c/manager/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.268191 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-xw4kb_aa9b27bf-234e-4116-8adf-68094684f237/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.376704 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-xw4kb_aa9b27bf-234e-4116-8adf-68094684f237/manager/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.484462 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-w8j9x_776dffbf-70bd-40b3-a88d-241ca0870179/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.642778 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-884lt_a5c61999-f3db-4d45-bb33-8b25d09cb675/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.654465 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-w8j9x_776dffbf-70bd-40b3-a88d-241ca0870179/manager/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.782650 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-884lt_a5c61999-f3db-4d45-bb33-8b25d09cb675/manager/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.915889 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-mzknb_fa7dcf85-ac60-4a43-beef-c92e1a597e4b/kube-rbac-proxy/0.log" Nov 24 22:47:44 crc kubenswrapper[4801]: I1124 22:47:44.994324 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-mzknb_fa7dcf85-ac60-4a43-beef-c92e1a597e4b/manager/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.147591 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-kdtqx_6072847d-a06e-4642-a120-d89098e76619/manager/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.152018 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-kdtqx_6072847d-a06e-4642-a120-d89098e76619/kube-rbac-proxy/0.log" Nov 24 22:47:45 crc 
kubenswrapper[4801]: I1124 22:47:45.313156 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-97z5m_57efd675-3fd7-4f61-bff4-f47645a37c1d/kube-rbac-proxy/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.367166 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-97z5m_57efd675-3fd7-4f61-bff4-f47645a37c1d/manager/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.405901 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-jmtpv_16518a90-b12a-402b-982b-7649945e5d7b/kube-rbac-proxy/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.554008 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-jmtpv_16518a90-b12a-402b-982b-7649945e5d7b/manager/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.643198 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-z89h9_9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433/kube-rbac-proxy/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.709514 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-z89h9_9520ce1a-a9f6-49a7-b1f7-b5dc0b1c2433/manager/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.771477 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jpl9z_94a2b03f-55b0-4cae-b8f9-53babac8e9e4/kube-rbac-proxy/0.log" Nov 24 22:47:45 crc kubenswrapper[4801]: I1124 22:47:45.936678 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jpl9z_94a2b03f-55b0-4cae-b8f9-53babac8e9e4/manager/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.000491 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-84qdp_03fe9036-562f-47e8-94c6-c64f1e289895/kube-rbac-proxy/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.049819 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-84qdp_03fe9036-562f-47e8-94c6-c64f1e289895/manager/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.462646 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c4c94676-k9g5k_8ca4a00c-ea8a-438c-a2ac-77a3f04a3471/operator/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.508347 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w2fqv_d5b5e147-92f8-4f47-995b-5e1710b5e593/registry-server/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.723268 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7wgw6_8eaba32d-1c83-4c67-8202-329ed133d882/kube-rbac-proxy/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.827010 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-n4wlr_b1de0d3c-119a-447e-aa94-63b0fcf992fa/kube-rbac-proxy/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.848320 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-7wgw6_8eaba32d-1c83-4c67-8202-329ed133d882/manager/0.log" Nov 24 22:47:46 crc kubenswrapper[4801]: I1124 22:47:46.992681 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-n4wlr_b1de0d3c-119a-447e-aa94-63b0fcf992fa/manager/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.073011 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9skrc_45ac145c-4f11-43ca-81c3-7b56c357ce5d/operator/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.234437 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-jhrrw_c8b5803e-6b9e-430b-a809-bd51c7ce77c8/kube-rbac-proxy/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.354733 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-jhrrw_c8b5803e-6b9e-430b-a809-bd51c7ce77c8/manager/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.440065 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5c7cd5746d-lcfhv_82b04169-cd4a-4658-b875-88b342622816/kube-rbac-proxy/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.463276 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fcf4778d9-sfg5s_d878fee2-936b-4264-938e-3d7997ec2c7d/manager/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.609883 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wjm9k_f8658668-cf48-4465-b119-dcc386aea963/kube-rbac-proxy/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.757440 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wjm9k_f8658668-cf48-4465-b119-dcc386aea963/manager/0.log" Nov 24 22:47:47 crc 
kubenswrapper[4801]: I1124 22:47:47.779122 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5c7cd5746d-lcfhv_82b04169-cd4a-4658-b875-88b342622816/manager/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.855215 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-4lmlj_0e5bc499-0e37-444d-9341-0f30dd8aaf4b/manager/0.log" Nov 24 22:47:47 crc kubenswrapper[4801]: I1124 22:47:47.856182 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-4lmlj_0e5bc499-0e37-444d-9341-0f30dd8aaf4b/kube-rbac-proxy/0.log" Nov 24 22:48:09 crc kubenswrapper[4801]: I1124 22:48:09.931031 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mnp5m_360559c7-6d20-4c26-9cfd-3c82af2df553/control-plane-machine-set-operator/0.log" Nov 24 22:48:10 crc kubenswrapper[4801]: I1124 22:48:10.134135 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sp8cw_f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c/kube-rbac-proxy/0.log" Nov 24 22:48:10 crc kubenswrapper[4801]: I1124 22:48:10.168702 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sp8cw_f2f6b3ce-23ec-4b6a-a5be-69eb71bac39c/machine-api-operator/0.log" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.828150 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:19 crc kubenswrapper[4801]: E1124 22:48:19.829608 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46589b7-1561-43db-a8a5-11df1de0c789" containerName="container-00" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.829631 4801 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c46589b7-1561-43db-a8a5-11df1de0c789" containerName="container-00" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.829944 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46589b7-1561-43db-a8a5-11df1de0c789" containerName="container-00" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.832265 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.845662 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.937432 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.937504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:19 crc kubenswrapper[4801]: I1124 22:48:19.937546 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnbp\" (UniqueName: \"kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.040412 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.040486 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.040522 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tnbp\" (UniqueName: \"kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.041042 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.041110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.062746 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnbp\" 
(UniqueName: \"kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp\") pod \"redhat-operators-n66m4\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.156328 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:20 crc kubenswrapper[4801]: I1124 22:48:20.875214 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:21 crc kubenswrapper[4801]: I1124 22:48:21.784750 4801 generic.go:334] "Generic (PLEG): container finished" podID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerID="f45a4903e9038a9977be6bf0de7e28f7db95db9ba046d7299781b44e089b15a8" exitCode=0 Nov 24 22:48:21 crc kubenswrapper[4801]: I1124 22:48:21.784806 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerDied","Data":"f45a4903e9038a9977be6bf0de7e28f7db95db9ba046d7299781b44e089b15a8"} Nov 24 22:48:21 crc kubenswrapper[4801]: I1124 22:48:21.785200 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerStarted","Data":"d375fe7b56bc6d0179f09807f923c2cf5a105ed012d2d7fdd23f7d0a612410b8"} Nov 24 22:48:22 crc kubenswrapper[4801]: I1124 22:48:22.802105 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerStarted","Data":"ec281abb7b01f315714a34084bebefcce495cd7b4b1829991b40e21c997925c3"} Nov 24 22:48:24 crc kubenswrapper[4801]: I1124 22:48:24.320035 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:48:24 crc kubenswrapper[4801]: I1124 22:48:24.320809 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:48:25 crc kubenswrapper[4801]: I1124 22:48:25.574591 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hdx96_abd371b6-063d-4063-a4e1-0ca1e9253b4c/cert-manager-controller/0.log" Nov 24 22:48:25 crc kubenswrapper[4801]: I1124 22:48:25.755100 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-84xwb_0e545ad2-65e7-4dd5-81ec-6c5726d5df36/cert-manager-cainjector/0.log" Nov 24 22:48:25 crc kubenswrapper[4801]: I1124 22:48:25.785392 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-g2kzb_2d939e62-3c98-4d65-9da2-04d29b510399/cert-manager-webhook/0.log" Nov 24 22:48:26 crc kubenswrapper[4801]: I1124 22:48:26.843501 4801 generic.go:334] "Generic (PLEG): container finished" podID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerID="ec281abb7b01f315714a34084bebefcce495cd7b4b1829991b40e21c997925c3" exitCode=0 Nov 24 22:48:26 crc kubenswrapper[4801]: I1124 22:48:26.843585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerDied","Data":"ec281abb7b01f315714a34084bebefcce495cd7b4b1829991b40e21c997925c3"} Nov 24 22:48:27 crc kubenswrapper[4801]: I1124 22:48:27.858516 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerStarted","Data":"e6132ed2fd9863542690470fc4169c017ef69d480383587033589ae5f7b03fba"} Nov 24 22:48:30 crc kubenswrapper[4801]: I1124 22:48:30.157417 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:30 crc kubenswrapper[4801]: I1124 22:48:30.157948 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:32 crc kubenswrapper[4801]: I1124 22:48:32.070431 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n66m4" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" probeResult="failure" output=< Nov 24 22:48:32 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:48:32 crc kubenswrapper[4801]: > Nov 24 22:48:41 crc kubenswrapper[4801]: I1124 22:48:41.220357 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n66m4" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" probeResult="failure" output=< Nov 24 22:48:41 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Nov 24 22:48:41 crc kubenswrapper[4801]: > Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.051294 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-hrx9b_5944a8e8-b2b1-4b07-8009-daa3215b3612/nmstate-console-plugin/0.log" Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.238546 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-db4b2_5d6ad10f-790a-49ed-aff0-13a1bd01c476/nmstate-handler/0.log" Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.271840 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-xxkcq_22bf0a52-c439-4e4e-95cf-f34685cc8185/kube-rbac-proxy/0.log" Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.284712 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-xxkcq_22bf0a52-c439-4e4e-95cf-f34685cc8185/nmstate-metrics/0.log" Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.493760 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-4m2h8_26c45212-f2f9-452d-adb5-64ab0fd1448b/nmstate-webhook/0.log" Nov 24 22:48:42 crc kubenswrapper[4801]: I1124 22:48:42.501753 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-nqqlk_c739d740-88a4-4d6d-875d-77434b46042b/nmstate-operator/0.log" Nov 24 22:48:50 crc kubenswrapper[4801]: I1124 22:48:50.233494 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:50 crc kubenswrapper[4801]: I1124 22:48:50.273971 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n66m4" podStartSLOduration=25.741075633 podStartE2EDuration="31.273940785s" podCreationTimestamp="2025-11-24 22:48:19 +0000 UTC" firstStartedPulling="2025-11-24 22:48:21.787704296 +0000 UTC m=+6073.870290966" lastFinishedPulling="2025-11-24 22:48:27.320569448 +0000 UTC m=+6079.403156118" observedRunningTime="2025-11-24 22:48:27.879356819 +0000 UTC m=+6079.961943489" watchObservedRunningTime="2025-11-24 22:48:50.273940785 +0000 UTC m=+6102.356527485" Nov 24 22:48:50 crc kubenswrapper[4801]: I1124 22:48:50.297236 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:51 crc kubenswrapper[4801]: I1124 22:48:51.020349 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:52 crc kubenswrapper[4801]: I1124 22:48:52.169603 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n66m4" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" containerID="cri-o://e6132ed2fd9863542690470fc4169c017ef69d480383587033589ae5f7b03fba" gracePeriod=2 Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.184963 4801 generic.go:334] "Generic (PLEG): container finished" podID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerID="e6132ed2fd9863542690470fc4169c017ef69d480383587033589ae5f7b03fba" exitCode=0 Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.185056 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerDied","Data":"e6132ed2fd9863542690470fc4169c017ef69d480383587033589ae5f7b03fba"} Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.879997 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.936124 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tnbp\" (UniqueName: \"kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp\") pod \"e88b2af3-38c8-4a81-bd55-e13f16029539\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.936199 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content\") pod \"e88b2af3-38c8-4a81-bd55-e13f16029539\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.936604 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities\") pod \"e88b2af3-38c8-4a81-bd55-e13f16029539\" (UID: \"e88b2af3-38c8-4a81-bd55-e13f16029539\") " Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.938192 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities" (OuterVolumeSpecName: "utilities") pod "e88b2af3-38c8-4a81-bd55-e13f16029539" (UID: "e88b2af3-38c8-4a81-bd55-e13f16029539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:48:53 crc kubenswrapper[4801]: I1124 22:48:53.964661 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp" (OuterVolumeSpecName: "kube-api-access-4tnbp") pod "e88b2af3-38c8-4a81-bd55-e13f16029539" (UID: "e88b2af3-38c8-4a81-bd55-e13f16029539"). InnerVolumeSpecName "kube-api-access-4tnbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.041213 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.041262 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tnbp\" (UniqueName: \"kubernetes.io/projected/e88b2af3-38c8-4a81-bd55-e13f16029539-kube-api-access-4tnbp\") on node \"crc\" DevicePath \"\"" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.053894 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e88b2af3-38c8-4a81-bd55-e13f16029539" (UID: "e88b2af3-38c8-4a81-bd55-e13f16029539"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.143463 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88b2af3-38c8-4a81-bd55-e13f16029539-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.212446 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n66m4" event={"ID":"e88b2af3-38c8-4a81-bd55-e13f16029539","Type":"ContainerDied","Data":"d375fe7b56bc6d0179f09807f923c2cf5a105ed012d2d7fdd23f7d0a612410b8"} Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.212507 4801 scope.go:117] "RemoveContainer" containerID="e6132ed2fd9863542690470fc4169c017ef69d480383587033589ae5f7b03fba" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.212547 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n66m4" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.263392 4801 scope.go:117] "RemoveContainer" containerID="ec281abb7b01f315714a34084bebefcce495cd7b4b1829991b40e21c997925c3" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.291431 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.299926 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n66m4"] Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.330563 4801 scope.go:117] "RemoveContainer" containerID="f45a4903e9038a9977be6bf0de7e28f7db95db9ba046d7299781b44e089b15a8" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.333882 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.333922 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:48:54 crc kubenswrapper[4801]: I1124 22:48:54.677224 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" path="/var/lib/kubelet/pods/e88b2af3-38c8-4a81-bd55-e13f16029539/volumes" Nov 24 22:48:58 crc kubenswrapper[4801]: I1124 22:48:58.243477 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c79fb6df8-dkhwm_bc08462b-4c52-4a4c-8e0d-30446d2b9a57/kube-rbac-proxy/0.log" Nov 24 22:48:58 crc kubenswrapper[4801]: I1124 22:48:58.262681 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c79fb6df8-dkhwm_bc08462b-4c52-4a4c-8e0d-30446d2b9a57/manager/0.log" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.367828 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:09 crc kubenswrapper[4801]: E1124 22:49:09.368962 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="extract-utilities" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.368979 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="extract-utilities" Nov 24 22:49:09 crc kubenswrapper[4801]: E1124 22:49:09.369011 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.369021 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" Nov 24 22:49:09 crc kubenswrapper[4801]: E1124 22:49:09.369042 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="extract-content" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.369050 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="extract-content" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.369324 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88b2af3-38c8-4a81-bd55-e13f16029539" containerName="registry-server" Nov 24 22:49:09 crc 
kubenswrapper[4801]: I1124 22:49:09.371287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.376686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.376744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c42r\" (UniqueName: \"kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.376915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.381158 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.480333 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc 
kubenswrapper[4801]: I1124 22:49:09.480670 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c42r\" (UniqueName: \"kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.480734 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.481318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.481589 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 22:49:09.509879 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c42r\" (UniqueName: \"kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r\") pod \"redhat-marketplace-lwqw5\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:09 crc kubenswrapper[4801]: I1124 
22:49:09.709197 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:10 crc kubenswrapper[4801]: I1124 22:49:10.215828 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:10 crc kubenswrapper[4801]: I1124 22:49:10.426862 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerStarted","Data":"f38cca1ee39d0fdc60b27a0b81dced92e2f23a319ca6fa52f93f619272192d87"} Nov 24 22:49:11 crc kubenswrapper[4801]: I1124 22:49:11.443561 4801 generic.go:334] "Generic (PLEG): container finished" podID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerID="2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2" exitCode=0 Nov 24 22:49:11 crc kubenswrapper[4801]: I1124 22:49:11.443639 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerDied","Data":"2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2"} Nov 24 22:49:11 crc kubenswrapper[4801]: I1124 22:49:11.448204 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:49:13 crc kubenswrapper[4801]: I1124 22:49:13.475168 4801 generic.go:334] "Generic (PLEG): container finished" podID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerID="1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04" exitCode=0 Nov 24 22:49:13 crc kubenswrapper[4801]: I1124 22:49:13.475760 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerDied","Data":"1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04"} Nov 24 22:49:14 crc kubenswrapper[4801]: 
I1124 22:49:14.487122 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerStarted","Data":"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6"} Nov 24 22:49:14 crc kubenswrapper[4801]: I1124 22:49:14.503815 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lwqw5" podStartSLOduration=3.007394465 podStartE2EDuration="5.503791013s" podCreationTimestamp="2025-11-24 22:49:09 +0000 UTC" firstStartedPulling="2025-11-24 22:49:11.447437949 +0000 UTC m=+6123.530024619" lastFinishedPulling="2025-11-24 22:49:13.943834477 +0000 UTC m=+6126.026421167" observedRunningTime="2025-11-24 22:49:14.503536695 +0000 UTC m=+6126.586123365" watchObservedRunningTime="2025-11-24 22:49:14.503791013 +0000 UTC m=+6126.586377673" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.218335 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-kvjtv_cc709992-f0d1-4d54-abcc-06c28f330196/cluster-logging-operator/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.281649 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-hd6fb_1128bf46-e782-4916-8554-b9e6e6ed28f5/collector/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.442696 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_f4e8c367-ba45-416c-ae57-0a143fa11854/loki-compactor/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.507565 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-xp22s_08034d9d-5888-426f-9a8c-137de45fef21/loki-distributor/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.660318 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-57b544c797-dvfrt_145468bd-4c70-49ef-a013-6cc672232c5e/opa/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.660555 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-57b544c797-dvfrt_145468bd-4c70-49ef-a013-6cc672232c5e/gateway/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.867956 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-57b544c797-tk7zw_3ac04356-36c8-4670-b849-cacb649d9a9a/opa/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.889743 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-57b544c797-tk7zw_3ac04356-36c8-4670-b849-cacb649d9a9a/gateway/0.log" Nov 24 22:49:16 crc kubenswrapper[4801]: I1124 22:49:16.913972 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_597f607f-f6da-4ba2-84e4-fd9cab0b313d/loki-index-gateway/0.log" Nov 24 22:49:17 crc kubenswrapper[4801]: I1124 22:49:17.123022 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_36d32422-038a-496e-9b7f-8616b90efb2b/loki-ingester/0.log" Nov 24 22:49:17 crc kubenswrapper[4801]: I1124 22:49:17.132310 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-sldkd_0196a92d-1cc8-4f72-922e-692ca28b2d88/loki-querier/0.log" Nov 24 22:49:18 crc kubenswrapper[4801]: I1124 22:49:18.135495 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-gw76f_f3696f2a-d943-47e9-b634-49e7293e64db/loki-query-frontend/0.log" Nov 24 22:49:19 crc kubenswrapper[4801]: I1124 22:49:19.710126 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:19 crc 
kubenswrapper[4801]: I1124 22:49:19.710478 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:19 crc kubenswrapper[4801]: I1124 22:49:19.770527 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:20 crc kubenswrapper[4801]: I1124 22:49:20.598496 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:21 crc kubenswrapper[4801]: I1124 22:49:21.009727 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:22 crc kubenswrapper[4801]: I1124 22:49:22.574713 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lwqw5" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="registry-server" containerID="cri-o://0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6" gracePeriod=2 Nov 24 22:49:22 crc kubenswrapper[4801]: E1124 22:49:22.853047 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773b10d5_6c3c_4a32_9e2a_5f643bec22a3.slice/crio-0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.146413 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.169490 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities\") pod \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.169581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content\") pod \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.169725 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c42r\" (UniqueName: \"kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r\") pod \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\" (UID: \"773b10d5-6c3c-4a32-9e2a-5f643bec22a3\") " Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.170707 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities" (OuterVolumeSpecName: "utilities") pod "773b10d5-6c3c-4a32-9e2a-5f643bec22a3" (UID: "773b10d5-6c3c-4a32-9e2a-5f643bec22a3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.171323 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.177640 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r" (OuterVolumeSpecName: "kube-api-access-5c42r") pod "773b10d5-6c3c-4a32-9e2a-5f643bec22a3" (UID: "773b10d5-6c3c-4a32-9e2a-5f643bec22a3"). InnerVolumeSpecName "kube-api-access-5c42r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.201878 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "773b10d5-6c3c-4a32-9e2a-5f643bec22a3" (UID: "773b10d5-6c3c-4a32-9e2a-5f643bec22a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.274778 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.274824 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c42r\" (UniqueName: \"kubernetes.io/projected/773b10d5-6c3c-4a32-9e2a-5f643bec22a3-kube-api-access-5c42r\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.589403 4801 generic.go:334] "Generic (PLEG): container finished" podID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerID="0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6" exitCode=0 Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.589477 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerDied","Data":"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6"} Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.589707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwqw5" event={"ID":"773b10d5-6c3c-4a32-9e2a-5f643bec22a3","Type":"ContainerDied","Data":"f38cca1ee39d0fdc60b27a0b81dced92e2f23a319ca6fa52f93f619272192d87"} Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.589734 4801 scope.go:117] "RemoveContainer" containerID="0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.589521 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwqw5" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.620288 4801 scope.go:117] "RemoveContainer" containerID="1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.629487 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.646469 4801 scope.go:117] "RemoveContainer" containerID="2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.654553 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwqw5"] Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.699778 4801 scope.go:117] "RemoveContainer" containerID="0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6" Nov 24 22:49:23 crc kubenswrapper[4801]: E1124 22:49:23.700661 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6\": container with ID starting with 0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6 not found: ID does not exist" containerID="0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.700924 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6"} err="failed to get container status \"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6\": rpc error: code = NotFound desc = could not find container \"0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6\": container with ID starting with 0ed854671de3f3278890c925270940101c73a861f72de1040401570d8d7c2cd6 not found: 
ID does not exist" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.700956 4801 scope.go:117] "RemoveContainer" containerID="1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04" Nov 24 22:49:23 crc kubenswrapper[4801]: E1124 22:49:23.701576 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04\": container with ID starting with 1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04 not found: ID does not exist" containerID="1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.701610 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04"} err="failed to get container status \"1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04\": rpc error: code = NotFound desc = could not find container \"1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04\": container with ID starting with 1b0607ef2fc2311e60cc70385c3eefa8b528879979500b6aebfd76014fc0fa04 not found: ID does not exist" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.701629 4801 scope.go:117] "RemoveContainer" containerID="2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2" Nov 24 22:49:23 crc kubenswrapper[4801]: E1124 22:49:23.702061 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2\": container with ID starting with 2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2 not found: ID does not exist" containerID="2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2" Nov 24 22:49:23 crc kubenswrapper[4801]: I1124 22:49:23.702113 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2"} err="failed to get container status \"2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2\": rpc error: code = NotFound desc = could not find container \"2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2\": container with ID starting with 2800368d507c46266cfbcfe3615d20bb54e72998dcb0069ccd554364730b2cc2 not found: ID does not exist" Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.319925 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.320012 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.320073 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.321435 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.321534 4801 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac" gracePeriod=600 Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.605917 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac" exitCode=0 Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.606494 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac"} Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.606882 4801 scope.go:117] "RemoveContainer" containerID="3cecf5c04e0b272c55e992716e56cbf190bfd91b99a55c61d676833aedfe061e" Nov 24 22:49:24 crc kubenswrapper[4801]: I1124 22:49:24.687551 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" path="/var/lib/kubelet/pods/773b10d5-6c3c-4a32-9e2a-5f643bec22a3/volumes" Nov 24 22:49:25 crc kubenswrapper[4801]: I1124 22:49:25.620996 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf"} Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.432451 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:28 crc kubenswrapper[4801]: E1124 22:49:28.433563 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" 
containerName="extract-content" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.433579 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="extract-content" Nov 24 22:49:28 crc kubenswrapper[4801]: E1124 22:49:28.433615 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="registry-server" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.433621 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="registry-server" Nov 24 22:49:28 crc kubenswrapper[4801]: E1124 22:49:28.433654 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="extract-utilities" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.433661 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="extract-utilities" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.433983 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="773b10d5-6c3c-4a32-9e2a-5f643bec22a3" containerName="registry-server" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.435775 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.457048 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.529517 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf6n\" (UniqueName: \"kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.529622 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.529700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.632028 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.632169 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.632348 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf6n\" (UniqueName: \"kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.632674 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:28 crc kubenswrapper[4801]: I1124 22:49:28.633070 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:29 crc kubenswrapper[4801]: I1124 22:49:29.036666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf6n\" (UniqueName: \"kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n\") pod \"certified-operators-whn8b\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:29 crc kubenswrapper[4801]: I1124 22:49:29.058070 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:29 crc kubenswrapper[4801]: I1124 22:49:29.668644 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:30 crc kubenswrapper[4801]: I1124 22:49:30.702575 4801 generic.go:334] "Generic (PLEG): container finished" podID="1987af30-5eee-49b7-8583-28706d42b22f" containerID="8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63" exitCode=0 Nov 24 22:49:30 crc kubenswrapper[4801]: I1124 22:49:30.704698 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerDied","Data":"8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63"} Nov 24 22:49:30 crc kubenswrapper[4801]: I1124 22:49:30.704747 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerStarted","Data":"4ef7986ece1241c661152fb907c604606a702ae262ff593a664c7d8b2458e335"} Nov 24 22:49:31 crc kubenswrapper[4801]: I1124 22:49:31.719232 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerStarted","Data":"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d"} Nov 24 22:49:32 crc kubenswrapper[4801]: I1124 22:49:32.730775 4801 generic.go:334] "Generic (PLEG): container finished" podID="1987af30-5eee-49b7-8583-28706d42b22f" containerID="68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d" exitCode=0 Nov 24 22:49:32 crc kubenswrapper[4801]: I1124 22:49:32.730960 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" 
event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerDied","Data":"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d"} Nov 24 22:49:33 crc kubenswrapper[4801]: I1124 22:49:33.755243 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerStarted","Data":"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208"} Nov 24 22:49:33 crc kubenswrapper[4801]: I1124 22:49:33.806205 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whn8b" podStartSLOduration=3.271404905 podStartE2EDuration="5.806166513s" podCreationTimestamp="2025-11-24 22:49:28 +0000 UTC" firstStartedPulling="2025-11-24 22:49:30.705716321 +0000 UTC m=+6142.788303001" lastFinishedPulling="2025-11-24 22:49:33.240477939 +0000 UTC m=+6145.323064609" observedRunningTime="2025-11-24 22:49:33.774043206 +0000 UTC m=+6145.856629876" watchObservedRunningTime="2025-11-24 22:49:33.806166513 +0000 UTC m=+6145.888753183" Nov 24 22:49:34 crc kubenswrapper[4801]: I1124 22:49:34.447933 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-gvfsb_f522ffc9-fe47-42ac-bc8e-2fdba3fbd404/kube-rbac-proxy/0.log" Nov 24 22:49:34 crc kubenswrapper[4801]: I1124 22:49:34.662003 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-gvfsb_f522ffc9-fe47-42ac-bc8e-2fdba3fbd404/controller/0.log" Nov 24 22:49:34 crc kubenswrapper[4801]: I1124 22:49:34.790660 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-8slp7_d5953034-4614-44bc-8d2a-5f2a2a6b37d0/frr-k8s-webhook-server/0.log" Nov 24 22:49:34 crc kubenswrapper[4801]: I1124 22:49:34.887728 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-frr-files/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.111738 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-reloader/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.112169 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-frr-files/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.128820 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-reloader/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.140350 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-metrics/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.474054 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-reloader/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.479642 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-metrics/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.486927 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-frr-files/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.531162 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-metrics/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.776371 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-reloader/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.776450 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-frr-files/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.841610 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/controller/0.log" Nov 24 22:49:35 crc kubenswrapper[4801]: I1124 22:49:35.850332 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/cp-metrics/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.217335 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/frr-metrics/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.217391 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/kube-rbac-proxy/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.332863 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/kube-rbac-proxy-frr/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.474126 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/reloader/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.706893 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-788b7684b7-wx8kf_b7735a1b-89b9-42e8-9f1d-8cae09154876/manager/0.log" Nov 24 22:49:36 crc kubenswrapper[4801]: I1124 22:49:36.805615 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-57ddf9c5d7-jfttq_0475a07b-3963-4a4b-af67-4db99502d8a4/webhook-server/0.log" Nov 24 22:49:37 crc kubenswrapper[4801]: I1124 22:49:37.113180 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t96bl_2618ece2-7600-4e60-add6-a7e8a2152cfe/kube-rbac-proxy/0.log" Nov 24 22:49:37 crc kubenswrapper[4801]: I1124 22:49:37.877584 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t96bl_2618ece2-7600-4e60-add6-a7e8a2152cfe/speaker/0.log" Nov 24 22:49:38 crc kubenswrapper[4801]: I1124 22:49:38.021477 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zzpmw_bf966c99-b331-4a64-a336-3e44898b4068/frr/0.log" Nov 24 22:49:39 crc kubenswrapper[4801]: I1124 22:49:39.059124 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:39 crc kubenswrapper[4801]: I1124 22:49:39.059455 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:39 crc kubenswrapper[4801]: I1124 22:49:39.118672 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:39 crc kubenswrapper[4801]: I1124 22:49:39.898915 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:40 crc kubenswrapper[4801]: I1124 22:49:40.364450 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:41 crc kubenswrapper[4801]: I1124 22:49:41.858922 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-whn8b" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="registry-server" 
containerID="cri-o://307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208" gracePeriod=2 Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.413434 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.516749 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxf6n\" (UniqueName: \"kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n\") pod \"1987af30-5eee-49b7-8583-28706d42b22f\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.517062 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities\") pod \"1987af30-5eee-49b7-8583-28706d42b22f\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.517113 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content\") pod \"1987af30-5eee-49b7-8583-28706d42b22f\" (UID: \"1987af30-5eee-49b7-8583-28706d42b22f\") " Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.517818 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities" (OuterVolumeSpecName: "utilities") pod "1987af30-5eee-49b7-8583-28706d42b22f" (UID: "1987af30-5eee-49b7-8583-28706d42b22f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.523551 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n" (OuterVolumeSpecName: "kube-api-access-fxf6n") pod "1987af30-5eee-49b7-8583-28706d42b22f" (UID: "1987af30-5eee-49b7-8583-28706d42b22f"). InnerVolumeSpecName "kube-api-access-fxf6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.578891 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1987af30-5eee-49b7-8583-28706d42b22f" (UID: "1987af30-5eee-49b7-8583-28706d42b22f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.619678 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.619802 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxf6n\" (UniqueName: \"kubernetes.io/projected/1987af30-5eee-49b7-8583-28706d42b22f-kube-api-access-fxf6n\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.619861 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1987af30-5eee-49b7-8583-28706d42b22f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.879199 4801 generic.go:334] "Generic (PLEG): container finished" podID="1987af30-5eee-49b7-8583-28706d42b22f" 
containerID="307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208" exitCode=0 Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.879269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerDied","Data":"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208"} Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.879320 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whn8b" event={"ID":"1987af30-5eee-49b7-8583-28706d42b22f","Type":"ContainerDied","Data":"4ef7986ece1241c661152fb907c604606a702ae262ff593a664c7d8b2458e335"} Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.879351 4801 scope.go:117] "RemoveContainer" containerID="307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.879421 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whn8b" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.912072 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.928172 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-whn8b"] Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.929905 4801 scope.go:117] "RemoveContainer" containerID="68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d" Nov 24 22:49:42 crc kubenswrapper[4801]: I1124 22:49:42.959200 4801 scope.go:117] "RemoveContainer" containerID="8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.044288 4801 scope.go:117] "RemoveContainer" containerID="307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208" Nov 24 22:49:43 crc kubenswrapper[4801]: E1124 22:49:43.046011 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208\": container with ID starting with 307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208 not found: ID does not exist" containerID="307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.046059 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208"} err="failed to get container status \"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208\": rpc error: code = NotFound desc = could not find container \"307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208\": container with ID starting with 307a1682b96e8ba29cc8f943958d845c1f321fb2cd5a99e346de8f5bce7e5208 not 
found: ID does not exist" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.046098 4801 scope.go:117] "RemoveContainer" containerID="68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d" Nov 24 22:49:43 crc kubenswrapper[4801]: E1124 22:49:43.046907 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d\": container with ID starting with 68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d not found: ID does not exist" containerID="68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.046935 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d"} err="failed to get container status \"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d\": rpc error: code = NotFound desc = could not find container \"68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d\": container with ID starting with 68b750d8593b928310ea6fe96263850515fad67b53ae90e8287c79b3876f124d not found: ID does not exist" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.046952 4801 scope.go:117] "RemoveContainer" containerID="8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63" Nov 24 22:49:43 crc kubenswrapper[4801]: E1124 22:49:43.047438 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63\": container with ID starting with 8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63 not found: ID does not exist" containerID="8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63" Nov 24 22:49:43 crc kubenswrapper[4801]: I1124 22:49:43.047464 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63"} err="failed to get container status \"8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63\": rpc error: code = NotFound desc = could not find container \"8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63\": container with ID starting with 8f9813774c204e2f15bcd1e21f703475b40b7f086ec0d820cc303d094c7e9e63 not found: ID does not exist" Nov 24 22:49:44 crc kubenswrapper[4801]: I1124 22:49:44.692460 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1987af30-5eee-49b7-8583-28706d42b22f" path="/var/lib/kubelet/pods/1987af30-5eee-49b7-8583-28706d42b22f/volumes" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.515903 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/util/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.710421 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/util/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.736769 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/pull/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.771486 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/pull/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.896063 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/util/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.935729 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/pull/0.log" Nov 24 22:49:52 crc kubenswrapper[4801]: I1124 22:49:52.939899 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8gwlgk_fd5002b4-4786-4560-82d1-57946fbf0b5c/extract/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.092482 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/util/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.278005 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/util/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.286003 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/pull/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.364566 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/pull/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.560802 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/util/0.log" Nov 24 
22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.561715 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/pull/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.594940 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2wc5l_356996ee-8ace-43af-8ef1-4ca116d86ffe/extract/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.769080 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/util/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.986548 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/util/0.log" Nov 24 22:49:53 crc kubenswrapper[4801]: I1124 22:49:53.987237 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/pull/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.018869 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/pull/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.216209 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/pull/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.220623 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/util/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.224985 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210l7pt5_84881a52-e1ca-4125-b8ae-2aceed531e1b/extract/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.440005 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/util/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.590934 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/util/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.598423 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/pull/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.625609 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/pull/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.793589 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/util/0.log" Nov 24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.822600 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/extract/0.log" Nov 
24 22:49:54 crc kubenswrapper[4801]: I1124 22:49:54.828876 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fflkzk_80d8329c-034d-457d-9e61-823b13e4e87e/pull/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.003857 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-utilities/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.194032 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-content/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.226342 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-content/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.252900 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-utilities/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.467571 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-content/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.477175 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/extract-utilities/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.710862 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-utilities/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.780237 4801 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x994p_c7e4732c-aa5f-4328-965a-2224085345a6/registry-server/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.934259 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-content/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.943593 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-utilities/0.log" Nov 24 22:49:55 crc kubenswrapper[4801]: I1124 22:49:55.986197 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-content/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.168900 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-utilities/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.196435 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/extract-content/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.405471 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/util/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.722680 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/util/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.739952 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/pull/0.log" Nov 24 22:49:56 crc kubenswrapper[4801]: I1124 22:49:56.742779 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/pull/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.005804 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/extract/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.028042 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/pull/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.032862 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6hsvbj_d56ac8f4-6c0b-4d87-b191-392103a75b60/util/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.281631 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-utilities/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.288233 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6lp8r_b7aa75b0-00a4-4f5b-89fb-d5ffd0ead9f9/registry-server/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.301209 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d2n4x_0cd03aa3-b315-4fad-904e-616d00db6ce6/marketplace-operator/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.470873 4801 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-content/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.472299 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-content/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.480779 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-utilities/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.647673 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-utilities/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.648276 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/extract-content/0.log" Nov 24 22:49:57 crc kubenswrapper[4801]: I1124 22:49:57.909024 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hlcbx_442941ec-61d1-45b6-9932-da2eb28d6a9e/registry-server/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.160612 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-utilities/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.336029 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-utilities/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.339954 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-content/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.346495 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-content/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.499176 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-utilities/0.log" Nov 24 22:49:59 crc kubenswrapper[4801]: I1124 22:49:59.502035 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/extract-content/0.log" Nov 24 22:50:00 crc kubenswrapper[4801]: I1124 22:50:00.338915 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wlmpd_52e88ff1-b918-4824-851f-a8b312e78e48/registry-server/0.log" Nov 24 22:50:14 crc kubenswrapper[4801]: I1124 22:50:14.318227 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-45vj8_375369a9-813f-42c3-8834-351eb5a1e296/prometheus-operator/0.log" Nov 24 22:50:15 crc kubenswrapper[4801]: I1124 22:50:15.332244 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-684cfffcb4-pbtd4_a7917bbe-8241-422f-b736-49a933738504/prometheus-operator-admission-webhook/0.log" Nov 24 22:50:15 crc kubenswrapper[4801]: I1124 22:50:15.351879 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-684cfffcb4-4kn7f_22cb726c-4ab9-4abd-8833-064874737125/prometheus-operator-admission-webhook/0.log" Nov 24 22:50:15 crc kubenswrapper[4801]: I1124 22:50:15.538675 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-p49bx_303202ae-884f-4b3e-a58a-77c294c81e7b/operator/0.log" Nov 24 22:50:15 crc kubenswrapper[4801]: I1124 22:50:15.555661 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-xjv9g_7deaf86e-7ea9-45ca-9f31-787c92a15400/observability-ui-dashboards/0.log" Nov 24 22:50:15 crc kubenswrapper[4801]: I1124 22:50:15.737050 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-lwcrt_7cc4d145-d0b4-45a9-b424-e2f09c04c88e/perses-operator/0.log" Nov 24 22:50:30 crc kubenswrapper[4801]: I1124 22:50:30.095136 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c79fb6df8-dkhwm_bc08462b-4c52-4a4c-8e0d-30446d2b9a57/kube-rbac-proxy/0.log" Nov 24 22:50:30 crc kubenswrapper[4801]: I1124 22:50:30.153827 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c79fb6df8-dkhwm_bc08462b-4c52-4a4c-8e0d-30446d2b9a57/manager/0.log" Nov 24 22:50:45 crc kubenswrapper[4801]: E1124 22:50:45.629173 4801 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.83:40152->38.102.83.83:34545: write tcp 38.102.83.83:40152->38.102.83.83:34545: write: connection reset by peer Nov 24 22:51:24 crc kubenswrapper[4801]: I1124 22:51:24.320254 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:51:24 crc kubenswrapper[4801]: I1124 22:51:24.321042 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" 
podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.282172 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:25 crc kubenswrapper[4801]: E1124 22:51:25.283600 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="extract-utilities" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.283632 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="extract-utilities" Nov 24 22:51:25 crc kubenswrapper[4801]: E1124 22:51:25.283663 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="extract-content" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.283675 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="extract-content" Nov 24 22:51:25 crc kubenswrapper[4801]: E1124 22:51:25.283727 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="registry-server" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.283739 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="registry-server" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.284173 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1987af30-5eee-49b7-8583-28706d42b22f" containerName="registry-server" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.287348 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.295438 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.390745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.390975 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkt9\" (UniqueName: \"kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.391220 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.493492 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.493585 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-njkt9\" (UniqueName: \"kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.493782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.494431 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.494687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.523557 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkt9\" (UniqueName: \"kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9\") pod \"community-operators-x96h7\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:25 crc kubenswrapper[4801]: I1124 22:51:25.630006 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:26 crc kubenswrapper[4801]: I1124 22:51:26.338612 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:27 crc kubenswrapper[4801]: I1124 22:51:27.183809 4801 generic.go:334] "Generic (PLEG): container finished" podID="40b8ee81-dcbb-432f-963d-856ad7ce00f3" containerID="b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4" exitCode=0 Nov 24 22:51:27 crc kubenswrapper[4801]: I1124 22:51:27.184213 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerDied","Data":"b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4"} Nov 24 22:51:27 crc kubenswrapper[4801]: I1124 22:51:27.184253 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerStarted","Data":"f120a7a37403dfa43496371472da539eacc32fa44b05dbce75d151ebf5b578b4"} Nov 24 22:51:29 crc kubenswrapper[4801]: I1124 22:51:29.217020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerStarted","Data":"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd"} Nov 24 22:51:30 crc kubenswrapper[4801]: I1124 22:51:30.239262 4801 generic.go:334] "Generic (PLEG): container finished" podID="40b8ee81-dcbb-432f-963d-856ad7ce00f3" containerID="c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd" exitCode=0 Nov 24 22:51:30 crc kubenswrapper[4801]: I1124 22:51:30.239346 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" 
event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerDied","Data":"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd"} Nov 24 22:51:31 crc kubenswrapper[4801]: I1124 22:51:31.258990 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerStarted","Data":"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643"} Nov 24 22:51:31 crc kubenswrapper[4801]: I1124 22:51:31.289008 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x96h7" podStartSLOduration=2.830574117 podStartE2EDuration="6.288983297s" podCreationTimestamp="2025-11-24 22:51:25 +0000 UTC" firstStartedPulling="2025-11-24 22:51:27.188353498 +0000 UTC m=+6259.270940178" lastFinishedPulling="2025-11-24 22:51:30.646762678 +0000 UTC m=+6262.729349358" observedRunningTime="2025-11-24 22:51:31.275947173 +0000 UTC m=+6263.358533883" watchObservedRunningTime="2025-11-24 22:51:31.288983297 +0000 UTC m=+6263.371569977" Nov 24 22:51:35 crc kubenswrapper[4801]: I1124 22:51:35.630442 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:35 crc kubenswrapper[4801]: I1124 22:51:35.631689 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:35 crc kubenswrapper[4801]: I1124 22:51:35.725537 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:36 crc kubenswrapper[4801]: I1124 22:51:36.423185 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:36 crc kubenswrapper[4801]: I1124 22:51:36.486273 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:38 crc kubenswrapper[4801]: I1124 22:51:38.370234 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x96h7" podUID="40b8ee81-dcbb-432f-963d-856ad7ce00f3" containerName="registry-server" containerID="cri-o://7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643" gracePeriod=2 Nov 24 22:51:38 crc kubenswrapper[4801]: I1124 22:51:38.956092 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.126697 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities\") pod \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.127179 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content\") pod \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.127212 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njkt9\" (UniqueName: \"kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9\") pod \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\" (UID: \"40b8ee81-dcbb-432f-963d-856ad7ce00f3\") " Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.127869 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities" (OuterVolumeSpecName: "utilities") pod "40b8ee81-dcbb-432f-963d-856ad7ce00f3" (UID: 
"40b8ee81-dcbb-432f-963d-856ad7ce00f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.133617 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9" (OuterVolumeSpecName: "kube-api-access-njkt9") pod "40b8ee81-dcbb-432f-963d-856ad7ce00f3" (UID: "40b8ee81-dcbb-432f-963d-856ad7ce00f3"). InnerVolumeSpecName "kube-api-access-njkt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.202070 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40b8ee81-dcbb-432f-963d-856ad7ce00f3" (UID: "40b8ee81-dcbb-432f-963d-856ad7ce00f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.230561 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.230609 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b8ee81-dcbb-432f-963d-856ad7ce00f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.230626 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njkt9\" (UniqueName: \"kubernetes.io/projected/40b8ee81-dcbb-432f-963d-856ad7ce00f3-kube-api-access-njkt9\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.390286 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="40b8ee81-dcbb-432f-963d-856ad7ce00f3" containerID="7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643" exitCode=0 Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.390351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerDied","Data":"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643"} Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.390408 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x96h7" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.390441 4801 scope.go:117] "RemoveContainer" containerID="7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.390417 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x96h7" event={"ID":"40b8ee81-dcbb-432f-963d-856ad7ce00f3","Type":"ContainerDied","Data":"f120a7a37403dfa43496371472da539eacc32fa44b05dbce75d151ebf5b578b4"} Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.414768 4801 scope.go:117] "RemoveContainer" containerID="c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.457443 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.462802 4801 scope.go:117] "RemoveContainer" containerID="b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.475028 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x96h7"] Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.526890 4801 scope.go:117] "RemoveContainer" 
containerID="7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643" Nov 24 22:51:39 crc kubenswrapper[4801]: E1124 22:51:39.536590 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643\": container with ID starting with 7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643 not found: ID does not exist" containerID="7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.536659 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643"} err="failed to get container status \"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643\": rpc error: code = NotFound desc = could not find container \"7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643\": container with ID starting with 7d87a1cef8b746158af67a77062ecf7e7ce3f127fd01a54bfe6000aa69207643 not found: ID does not exist" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.536700 4801 scope.go:117] "RemoveContainer" containerID="c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd" Nov 24 22:51:39 crc kubenswrapper[4801]: E1124 22:51:39.537288 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd\": container with ID starting with c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd not found: ID does not exist" containerID="c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.537333 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd"} err="failed to get container status \"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd\": rpc error: code = NotFound desc = could not find container \"c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd\": container with ID starting with c438069bf761073b0867bfabc02db4954579bf2bd945a88a303276ed031336dd not found: ID does not exist" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.537388 4801 scope.go:117] "RemoveContainer" containerID="b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4" Nov 24 22:51:39 crc kubenswrapper[4801]: E1124 22:51:39.539260 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4\": container with ID starting with b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4 not found: ID does not exist" containerID="b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4" Nov 24 22:51:39 crc kubenswrapper[4801]: I1124 22:51:39.539436 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4"} err="failed to get container status \"b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4\": rpc error: code = NotFound desc = could not find container \"b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4\": container with ID starting with b43e08f4d231ef5c6834ad548259c34e5f51218573c9e937e09e28b4d98b7dd4 not found: ID does not exist" Nov 24 22:51:40 crc kubenswrapper[4801]: I1124 22:51:40.685505 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b8ee81-dcbb-432f-963d-856ad7ce00f3" path="/var/lib/kubelet/pods/40b8ee81-dcbb-432f-963d-856ad7ce00f3/volumes" Nov 24 22:51:54 crc kubenswrapper[4801]: I1124 
22:51:54.321085 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:51:54 crc kubenswrapper[4801]: I1124 22:51:54.321729 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:51:55 crc kubenswrapper[4801]: I1124 22:51:55.363400 4801 scope.go:117] "RemoveContainer" containerID="f9137d3eaddf9de9ee03c1afca69b60619d09b18432310c3b3c83204b3fb70df" Nov 24 22:52:24 crc kubenswrapper[4801]: I1124 22:52:24.322220 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mnfsp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:52:24 crc kubenswrapper[4801]: I1124 22:52:24.322805 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:52:24 crc kubenswrapper[4801]: I1124 22:52:24.322889 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" Nov 24 22:52:24 crc kubenswrapper[4801]: I1124 22:52:24.323859 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf"} pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:52:24 crc kubenswrapper[4801]: I1124 22:52:24.323956 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerName="machine-config-daemon" containerID="cri-o://b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" gracePeriod=600 Nov 24 22:52:24 crc kubenswrapper[4801]: E1124 22:52:24.448146 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:52:25 crc kubenswrapper[4801]: I1124 22:52:25.074039 4801 generic.go:334] "Generic (PLEG): container finished" podID="ce526e40-8920-4d1a-adfe-a7149eed9a11" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" exitCode=0 Nov 24 22:52:25 crc kubenswrapper[4801]: I1124 22:52:25.074116 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerDied","Data":"b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf"} Nov 24 22:52:25 crc kubenswrapper[4801]: I1124 22:52:25.074203 4801 scope.go:117] "RemoveContainer" containerID="0317fc26fc78a80a41a408957484b54894b82c79a2597d92216148bda8ae01ac" Nov 24 22:52:25 crc kubenswrapper[4801]: I1124 22:52:25.075264 4801 
scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:52:25 crc kubenswrapper[4801]: E1124 22:52:25.075912 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:52:28 crc kubenswrapper[4801]: I1124 22:52:28.114189 4801 generic.go:334] "Generic (PLEG): container finished" podID="ff20195d-a4de-4ba9-afb0-0eae3710a84a" containerID="620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8" exitCode=0 Nov 24 22:52:28 crc kubenswrapper[4801]: I1124 22:52:28.114295 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vts7v/must-gather-hdwj6" event={"ID":"ff20195d-a4de-4ba9-afb0-0eae3710a84a","Type":"ContainerDied","Data":"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8"} Nov 24 22:52:28 crc kubenswrapper[4801]: I1124 22:52:28.115783 4801 scope.go:117] "RemoveContainer" containerID="620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8" Nov 24 22:52:28 crc kubenswrapper[4801]: I1124 22:52:28.789986 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vts7v_must-gather-hdwj6_ff20195d-a4de-4ba9-afb0-0eae3710a84a/gather/0.log" Nov 24 22:52:36 crc kubenswrapper[4801]: I1124 22:52:36.514672 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vts7v/must-gather-hdwj6"] Nov 24 22:52:36 crc kubenswrapper[4801]: I1124 22:52:36.515652 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vts7v/must-gather-hdwj6" podUID="ff20195d-a4de-4ba9-afb0-0eae3710a84a" 
containerName="copy" containerID="cri-o://375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8" gracePeriod=2 Nov 24 22:52:36 crc kubenswrapper[4801]: I1124 22:52:36.542893 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vts7v/must-gather-hdwj6"] Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.064576 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vts7v_must-gather-hdwj6_ff20195d-a4de-4ba9-afb0-0eae3710a84a/copy/0.log" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.066029 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.227890 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gf8n\" (UniqueName: \"kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n\") pod \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.228193 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output\") pod \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\" (UID: \"ff20195d-a4de-4ba9-afb0-0eae3710a84a\") " Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.239207 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vts7v_must-gather-hdwj6_ff20195d-a4de-4ba9-afb0-0eae3710a84a/copy/0.log" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.239973 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n" (OuterVolumeSpecName: "kube-api-access-5gf8n") pod "ff20195d-a4de-4ba9-afb0-0eae3710a84a" (UID: 
"ff20195d-a4de-4ba9-afb0-0eae3710a84a"). InnerVolumeSpecName "kube-api-access-5gf8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.240004 4801 generic.go:334] "Generic (PLEG): container finished" podID="ff20195d-a4de-4ba9-afb0-0eae3710a84a" containerID="375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8" exitCode=143 Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.240061 4801 scope.go:117] "RemoveContainer" containerID="375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.240116 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vts7v/must-gather-hdwj6" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.326298 4801 scope.go:117] "RemoveContainer" containerID="620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.334540 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gf8n\" (UniqueName: \"kubernetes.io/projected/ff20195d-a4de-4ba9-afb0-0eae3710a84a-kube-api-access-5gf8n\") on node \"crc\" DevicePath \"\"" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.384076 4801 scope.go:117] "RemoveContainer" containerID="375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8" Nov 24 22:52:37 crc kubenswrapper[4801]: E1124 22:52:37.384608 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8\": container with ID starting with 375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8 not found: ID does not exist" containerID="375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.384659 4801 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8"} err="failed to get container status \"375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8\": rpc error: code = NotFound desc = could not find container \"375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8\": container with ID starting with 375113a15eb994fb644f81f40b8e387154353271ae4341f4c67e580851f8e2a8 not found: ID does not exist" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.384689 4801 scope.go:117] "RemoveContainer" containerID="620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8" Nov 24 22:52:37 crc kubenswrapper[4801]: E1124 22:52:37.385104 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8\": container with ID starting with 620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8 not found: ID does not exist" containerID="620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.385124 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8"} err="failed to get container status \"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8\": rpc error: code = NotFound desc = could not find container \"620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8\": container with ID starting with 620ff50df0d8a386dd2b69c09c27ef165db9efde1da60cee8c3cc519d4e8a4a8 not found: ID does not exist" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.455564 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod 
"ff20195d-a4de-4ba9-afb0-0eae3710a84a" (UID: "ff20195d-a4de-4ba9-afb0-0eae3710a84a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.539198 4801 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff20195d-a4de-4ba9-afb0-0eae3710a84a-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 22:52:37 crc kubenswrapper[4801]: I1124 22:52:37.665188 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:52:37 crc kubenswrapper[4801]: E1124 22:52:37.665514 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:52:38 crc kubenswrapper[4801]: I1124 22:52:38.682222 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff20195d-a4de-4ba9-afb0-0eae3710a84a" path="/var/lib/kubelet/pods/ff20195d-a4de-4ba9-afb0-0eae3710a84a/volumes" Nov 24 22:52:52 crc kubenswrapper[4801]: I1124 22:52:52.667246 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:52:52 crc kubenswrapper[4801]: E1124 22:52:52.670934 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:52:55 crc kubenswrapper[4801]: I1124 22:52:55.526135 4801 scope.go:117] "RemoveContainer" containerID="ecae779da0b36a86874e3266c70e446ea67ad2a5309fc9e563f29a8480874cb7" Nov 24 22:53:06 crc kubenswrapper[4801]: I1124 22:53:06.664436 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:53:06 crc kubenswrapper[4801]: E1124 22:53:06.665263 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:53:17 crc kubenswrapper[4801]: I1124 22:53:17.665232 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:53:17 crc kubenswrapper[4801]: E1124 22:53:17.666543 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:53:31 crc kubenswrapper[4801]: I1124 22:53:31.665107 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:53:31 crc kubenswrapper[4801]: E1124 22:53:31.666156 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:53:45 crc kubenswrapper[4801]: I1124 22:53:45.664223 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:53:45 crc kubenswrapper[4801]: E1124 22:53:45.665088 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:53:57 crc kubenswrapper[4801]: I1124 22:53:57.664868 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:53:57 crc kubenswrapper[4801]: E1124 22:53:57.665722 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:54:10 crc kubenswrapper[4801]: I1124 22:54:10.667493 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:54:10 crc kubenswrapper[4801]: E1124 22:54:10.668294 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:54:23 crc kubenswrapper[4801]: I1124 22:54:23.664242 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:54:23 crc kubenswrapper[4801]: E1124 22:54:23.664959 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:54:38 crc kubenswrapper[4801]: I1124 22:54:38.674716 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:54:38 crc kubenswrapper[4801]: E1124 22:54:38.675623 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:54:52 crc kubenswrapper[4801]: I1124 22:54:52.664412 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:54:52 crc kubenswrapper[4801]: E1124 22:54:52.665422 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:55:03 crc kubenswrapper[4801]: I1124 22:55:03.664194 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:55:03 crc kubenswrapper[4801]: E1124 22:55:03.666702 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:55:16 crc kubenswrapper[4801]: I1124 22:55:16.665158 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:55:16 crc kubenswrapper[4801]: E1124 22:55:16.666252 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:55:30 crc kubenswrapper[4801]: I1124 22:55:30.665081 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:55:30 crc kubenswrapper[4801]: E1124 22:55:30.666758 4801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:55:41 crc kubenswrapper[4801]: I1124 22:55:41.665509 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:55:41 crc kubenswrapper[4801]: E1124 22:55:41.666806 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:55:54 crc kubenswrapper[4801]: I1124 22:55:54.664675 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:55:54 crc kubenswrapper[4801]: E1124 22:55:54.666452 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:56:08 crc kubenswrapper[4801]: I1124 22:56:08.679427 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:56:08 crc kubenswrapper[4801]: E1124 22:56:08.680483 4801 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:56:19 crc kubenswrapper[4801]: I1124 22:56:19.666389 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:56:19 crc kubenswrapper[4801]: E1124 22:56:19.667779 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:56:34 crc kubenswrapper[4801]: I1124 22:56:34.664827 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:56:34 crc kubenswrapper[4801]: E1124 22:56:34.666200 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:56:48 crc kubenswrapper[4801]: I1124 22:56:48.678856 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:56:48 crc kubenswrapper[4801]: E1124 22:56:48.680434 4801 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:57:02 crc kubenswrapper[4801]: I1124 22:57:02.664547 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:57:02 crc kubenswrapper[4801]: E1124 22:57:02.666038 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:57:14 crc kubenswrapper[4801]: I1124 22:57:14.665718 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:57:14 crc kubenswrapper[4801]: E1124 22:57:14.667083 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mnfsp_openshift-machine-config-operator(ce526e40-8920-4d1a-adfe-a7149eed9a11)\"" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" podUID="ce526e40-8920-4d1a-adfe-a7149eed9a11" Nov 24 22:57:29 crc kubenswrapper[4801]: I1124 22:57:29.666803 4801 scope.go:117] "RemoveContainer" containerID="b9176264caf83f62b7bf54bc4667259812f4f416f640042667b9a35fc8bd45cf" Nov 24 22:57:30 crc kubenswrapper[4801]: I1124 
22:57:30.751681 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mnfsp" event={"ID":"ce526e40-8920-4d1a-adfe-a7149eed9a11","Type":"ContainerStarted","Data":"ca33d421ba9f2d3f1557087e9a320aa6daaa48986a4a27e3097c7de283148ae9"}